This project is mirrored from https://:*****@github.com/Ensembl/ensembl-hive.git.
- 17 Feb, 2005 1 commit
-
-
Jessica Severin authored
-
- 16 Feb, 2005 8 commits
-
-
Jessica Severin authored
is when there are lots of workers 'WORKING', so as to avoid them falling over each other. The 'WORKING' state only exists in the middle of a large run. When the last worker dies, the state is 'ALL_CLAIMED', so the sync on death will happen properly. As the last pile of workers dies they will all sync, but that's OK: the system needs to be properly synced when the last one dies, since there won't be anybody left to do it. Also added a 10-minute check on an existing 'SYNCHING' state, to deal with the case where a worker dies in the middle of 'SYNCHING'.
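The state logic described above can be sketched roughly as follows (Python for illustration only; the actual code is Perl in the hive's Queen module, and the class and method names here are hypothetical):

```python
import time

SYNC_STALE_SECONDS = 600  # the 10-minute guard described above (assumed value)

class Hive:
    """Minimal stand-in for the hive's shared state."""
    def __init__(self, status):
        self.status = status
        self.synching_since = 0.0
        self.synced = False

    def do_sync(self):
        self.synced = True  # placeholder for the real synchronization work

def maybe_sync(hive):
    """Sync only when safe: skip while workers are 'WORKING', and
    recover if a previous worker died mid-'SYNCHING'."""
    if hive.status == "WORKING":
        return False  # middle of a large run: avoid workers falling over each other
    if hive.status == "SYNCHING" and \
            time.time() - hive.synching_since < SYNC_STALE_SECONDS:
        return False  # someone else is (recently) synching; let them finish
    hive.status = "SYNCHING"
    hive.synching_since = time.time()
    hive.do_sync()
    hive.status = "ALL_CLAIMED"  # state left behind so sync-on-death works
    return True
```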
-
Jessica Severin authored
-
Jessica Severin authored
-
Jessica Severin authored
so as to reduce the synchronization frequency.
-
Jessica Severin authored
-
Jessica Severin authored
call lower down isn't needed. Also moved the printing of the analysis_stats up higher to display better with the new printing order. Now -loop -analysis_stats looks right.
-
Jessica Severin authored
added a check/set of status to 'SYNCHING' right before the sync procedure, to prevent multiple workers from trying to sync simultaneously.
-
Jessica Severin authored
-
- 14 Feb, 2005 1 commit
-
-
Jessica Severin authored
will return 0E0 if zero rows are inserted, which Perl interprets as true, so I need to check for it explicitly. The store method now returns 1 on 'new insert' and 0 on 'already stored'.
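The gotcha here is Perl-specific: DBI's execute returns the string '0E0' when zero rows are affected, which is numerically zero but boolean-true. A rough Python/sqlite3 analogue of the corrected store() contract (1 on new insert, 0 if already stored; table layout hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE analysis_job (analysis_id INT, input_id TEXT, "
             "UNIQUE(analysis_id, input_id))")

def store(analysis_id, input_id):
    """Return 1 if a new row was inserted, 0 if it was already stored."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO analysis_job (analysis_id, input_id) "
        "VALUES (?, ?)", (analysis_id, input_id))
    # rowcount is a real integer here; in Perl DBI one must compare
    # numerically, because '0E0' (zero rows) is a *true* value
    return 1 if cur.rowcount == 1 else 0
```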
-
- 10 Feb, 2005 3 commits
-
-
Jessica Severin authored
complete an analysis. If no job has been run yet (0 msec), it will assume 1 job per worker, up to the hive_capacity (maximum parallelization). Also changed worker->process_id to be the pid of the process, not the ppid.
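The scheduling arithmetic might look something like this (an illustrative sketch only; the formula is assumed, not taken from the actual Perl source):

```python
import math

def workers_needed(unclaimed_jobs, avg_msec_per_job, batch_size, hive_capacity):
    """Estimate how many workers are needed to complete an analysis.
    With no timing data yet (avg_msec_per_job == 0), assume one job per
    worker, capped at hive_capacity (the maximum parallelization)."""
    if avg_msec_per_job == 0:
        return min(unclaimed_jobs, hive_capacity)
    # with timing data, one worker handles batch_size jobs per claim
    return min(math.ceil(unclaimed_jobs / batch_size), hive_capacity)
```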
-
Jessica Severin authored
if it runs properly, the job looks like a normally claimed/fetched/run job
-
Jessica Severin authored
is asked to 're-run' a specific job. By reclaiming, this job is properly processed so when it finishes it looks like it was run normally by the system.
-
- 09 Feb, 2005 1 commit
-
-
Jessica Severin authored
-
- 08 Feb, 2005 1 commit
-
-
Jessica Severin authored
extended the display in automation looping to print stats on currently running workers and an overall statistic on the progress of the whole hive (% complete and total jobs)
-
- 07 Feb, 2005 1 commit
-
-
Jessica Severin authored
-debug <level> : turn on debug messages at <level>
-no_cleanup : don't perform global_cleanup when worker exits
-
- 04 Feb, 2005 6 commits
-
-
Jessica Severin authored
better debug system to allow the worker to control the analysis debug level. New batch_size method will use avg_msec_per_job to calculate batch_size on the fly if analysis_stats.batch_size == 0.
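An on-the-fly batch-size policy of this shape can be sketched as follows (Python for illustration; the target duration and fallback values are assumptions, not taken from the source):

```python
# hypothetical target: claim roughly 2 minutes of work per batch
TARGET_BATCH_MSEC = 2 * 60 * 1000

def effective_batch_size(configured_batch_size, avg_msec_per_job):
    """If analysis_stats.batch_size == 0, derive a batch size from the
    measured avg_msec_per_job (illustrative policy)."""
    if configured_batch_size > 0:
        return configured_batch_size  # explicit setting wins
    if avg_msec_per_job <= 0:
        return 1  # no timing data yet: claim a single job
    return max(1, TARGET_BATCH_MSEC // avg_msec_per_job)
```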
-
Jessica Severin authored
-
Jessica Severin authored
-
Jessica Severin authored
keep analysis_job.input_id as varchar(255) to allow UNIQUE(analysis_id, input_id), but added logic in the adaptor so that if the input_id in an AnalysisJob object exceeds the 255-char limit, it is stored/fetched from the analysis_data table. The input_id in the analysis_job table becomes '_ext_input_analysis_data_id ##', a unique internal value which triggers the fetch routine to get the 'real' input_id from the analysis_data table. NO MORE 255-char limit on input_id, and it's completely transparent to the API user.
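The overflow scheme reads naturally as a pair of store/fetch helpers. A minimal sketch (Python for illustration; the sentinel string comes from the message above, but the in-memory "table" and function names are hypothetical stand-ins for the adaptor and the analysis_data table):

```python
MAX_INPUT_ID_LEN = 255
SENTINEL = "_ext_input_analysis_data_id"

analysis_data = {}      # stand-in for the analysis_data table
_next_data_id = [1]

def store_input_id(input_id):
    """Return the value to place in analysis_job.input_id: the raw string
    if it fits in varchar(255), else a sentinel pointing into analysis_data."""
    if len(input_id) <= MAX_INPUT_ID_LEN:
        return input_id
    data_id = _next_data_id[0]
    _next_data_id[0] += 1
    analysis_data[data_id] = input_id
    return f"{SENTINEL} {data_id}"

def fetch_input_id(stored):
    """Transparently resolve the sentinel back to the 'real' input_id."""
    if stored.startswith(SENTINEL):
        return analysis_data[int(stored.rsplit(" ", 1)[1])]
    return stored
```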
-
Jessica Severin authored
if the hash keys/values are the same.
-
Jessica Severin authored
to the AnalysisStats object, and store/fetch of avg_msec_per_job from the analysis_stats table. Corresponds to a schema change. Old hive databases need to run this SQL to bring them up to spec:
alter table analysis_stats add column avg_msec_per_job int(10) default 0 NOT NULL after batch_size;
alter table analysis_stats modify column last_update datetime NOT NULL;
-
- 02 Feb, 2005 1 commit
-
-
Jessica Severin authored
analysis_stats.avg_msec_per_job int. Modified analysis_stats.last_update to a datetime so that it is only changed when the API tells it to.
-
- 01 Feb, 2005 1 commit
-
-
Jessica Severin authored
-
- 28 Jan, 2005 1 commit
-
-
Abel Ureta-Vidal authored
In the run_autonomously method, updated a queen method call to print_running_worker_status; it was wrongly calling print_worker_status.
-
- 19 Jan, 2005 1 commit
-
-
Jessica Severin authored
-
- 18 Jan, 2005 5 commits
-
-
Jessica Severin authored
to 'LOADING' to trigger sync so system knows that something changed
-
Jessica Severin authored
-failed_jobs : show all failed jobs
-reset_job_id <num> : reset a job back to READY so it can be rerun
-reset_all_jobs_for_analysis_id <num>
-
Jessica Severin authored
-
Jessica Severin authored
added method reset_all_jobs_for_analysis_id to facilitate re-flowing data through new dataflow rules. Extended the perldoc. Changed the 'retry count' to 7 (so a job runs 1 + 7 retries).
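The "1 + 7 retries" policy amounts to the following attempt loop (a sketch in Python; in the real system the retry bookkeeping lives in the analysis_job table rather than a loop, and these names are hypothetical):

```python
MAX_RETRY_COUNT = 7  # per the message: one initial run plus up to 7 retries

def run_with_retries(job, run):
    """Attempt a job until it succeeds or 1 + MAX_RETRY_COUNT attempts
    have failed; return the result, or None once retries are exhausted."""
    for _attempt in range(1 + MAX_RETRY_COUNT):
        try:
            return run(job)
        except Exception:
            continue  # job would go back to READY with retry_count + 1
    return None  # job is marked FAILED after exhausting its retries
```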
-
Jessica Severin authored
-
- 13 Jan, 2005 5 commits
-
-
Jessica Severin authored
thus allowing the RunnableDBs to track output data back to the specific analysis_job_id
-
Jessica Severin authored
-
Jessica Severin authored
Initially used to manually re-run a job with runWorker.pl -job_id
-
Jessica Severin authored
and Analysis::RunnableDB superclasses
-
Jessica Severin authored
-
- 12 Jan, 2005 1 commit
-
-
Jessica Severin authored
properly handle RunnableDBs that throw exceptions in the fetch_input stage.
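The fix amounts to wrapping the fetch_input stage so a throw fails the job instead of killing the worker. A minimal sketch (Python for illustration; the actual code is the Perl Worker/RunnableDB lifecycle, and these names are hypothetical):

```python
def run_job(job, runnable):
    """Worker-side job lifecycle: an exception in fetch_input marks the
    job FAILED (eligible for retry) rather than crashing the worker."""
    try:
        runnable.fetch_input()
    except Exception as err:
        job.status = "FAILED"
        job.error = str(err)
        return False
    # only a job whose input fetched cleanly proceeds to run/write_output
    runnable.run()
    runnable.write_output()
    job.status = "DONE"
    return True
```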
-
- 11 Jan, 2005 3 commits
-
-
Jessica Severin authored
changed INSERT syntax to be more SQL compliant
-
Jessica Severin authored
changed INSERT syntax to be more SQL compliant
-
Jessica Severin authored
-