This project is mirrored from https://github.com/Ensembl/ensembl-hive.git.
- Aug 06, 2004
Jessica Severin authored

- Aug 04, 2004
Jessica Severin authored

- Aug 03, 2004
Jessica Severin authored
turn disconnect ON when there will be lots of them and they have moments when there will be little DB activity. The new disconnect system disconnects so much that it's slower than before, so it must be used sparingly.
Jessica Severin authored
output jobs) is that it needs a fast database, so don't disconnect_when_inactive

Jessica Severin authored

Jessica Severin authored
created new() methods where needed, replaced throw, rearrange as needed
- Aug 02, 2004
Jessica Severin authored

Jessica Severin authored

Jessica Severin authored

Jessica Severin authored
If type is defined in the url string, that will take precedence.
Bio::EnsEMBL::Hive::URLFactory->fetch($url, 'compara'); # defaults to compara DBA
Bio::EnsEMBL::Hive::URLFactory->fetch($url); # defaults to hive DBA
Jessica Severin authored

Jessica Severin authored
inherit from them (as in $dba->dbc->)

Jessica Severin authored
eg: mysql://ensro@ecs4:3352/ensembl_compara_22_1;type=compara. Current types are (core, hive, compara, pipeline); the default is hive. Also changed to a global class instance so that cleanup can be automatic via the DESTROY method. Deprecated the cleanup method.
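The URL syntax described in this entry can be sketched in Python (the actual URLFactory is Perl; the function and dictionary keys below are hypothetical, chosen only to illustrate the parsing rules: an optional `;type=` suffix, a known-type check, and `hive` as the default):

```python
from urllib.parse import urlparse

DEFAULT_TYPE = "hive"
KNOWN_TYPES = {"core", "hive", "compara", "pipeline"}

def parse_hive_url(url: str) -> dict:
    """Split a URL like mysql://user@host:port/dbname;type=compara into parts."""
    # Separate the optional ";type=..." parameter from the base URL.
    base, _, param = url.partition(";")
    dba_type = DEFAULT_TYPE
    if param.startswith("type="):
        dba_type = param[len("type="):]
    if dba_type not in KNOWN_TYPES:
        raise ValueError(f"unknown type: {dba_type}")
    parts = urlparse(base)
    return {
        "driver": parts.scheme,
        "user": parts.username,
        "host": parts.hostname,
        "port": parts.port,
        "dbname": parts.path.lstrip("/"),
        "type": dba_type,
    }
```

With the example URL from the entry, this yields driver `mysql`, host `ecs4`, port `3352`, dbname `ensembl_compara_22_1`, and type `compara`; without the `;type=` suffix the type falls back to `hive`.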
- Jul 29, 2004
Ian Longden authored

- Jul 27, 2004
Jessica Severin authored
as DONE now with LSF, and EXIT will be reserved for true failure cases
- Jul 21, 2004
Jessica Severin authored
blocking can occur both at the job level and the analysis level. Blocking and unblocking at the job level will require specific analyses to determine the logic, and will not be implemented in a generic way within the hive system.
- Jul 16, 2004
Jessica Severin authored

Jessica Severin authored

Jessica Severin authored
on failure and retry_count>=5. Also changed Queen analysis summary to classify an analysis as 'DONE' when all jobs are either DONE or FAILED and hence allow the processing to proceed forward.
- Jul 15, 2004
Jessica Severin authored
it syncs and displays a full summary of the state of the hive, including which workers were overdue, how many are needed, and which workers are running. Also changed check_for_dead to use the LSF bjobs command, since I now store an LSF job_id and array_index in the process_id for LSF workers. Also changed the overdue time limit to 75 minutes since the expected lifetime is 60 minutes.
Jessica Severin authored
as well also set the beekeeper to LSF. Commented this.

Jessica Severin authored
to prevent excessive workers from saturating the system.
Jessica Severin authored
when a bsub is issued without a job array and bjobs does not like 234234[0] as a job_id

Jessica Severin authored
added to a user's path and the programs run from any directory

Jessica Severin authored
which causes a more even distribution of workers types as they are created by the Queen.
- Jul 14, 2004
Jessica Severin authored
lsf job arrays (e.g. 72344[3]). Also changed analysis_job index (analysis_id, status) so that the analysis_id is indexed first which provides better indexing when only the analysis_id is specified.
Jessica Severin authored
also changed time intervals (overdue workers now to 75 minutes and poll interval to 5 minutes). The polling load to check hive status is essentially zero.

Jessica Severin authored
Can't seem to figure out a way of passing the lsf job_id and array_index as parameters, so hardcoded access of the environment variables LSB_JOBID and LSB_JOBINDEX inside the runWorker script. If both variables are set, this should imply that the worker was created by an lsf daemon, and these values are used to check the worker's life state in time and space. Otherwise the process_id will fall back on the ppid of the process (on the 'host' it's running on).
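The env-var fallback described here, together with the earlier fix for non-array jobs (bjobs rejecting 234234[0]), can be sketched as follows. This is a hypothetical Python rendering of the described logic, not the actual Perl runWorker code:

```python
import os

def worker_process_id() -> str:
    """Derive a worker's process_id: LSF job id (with array index when the
    job is part of an array), otherwise the parent pid on this host."""
    job_id = os.environ.get("LSB_JOBID")
    job_index = os.environ.get("LSB_JOBINDEX")
    if job_id:
        if job_index and job_index != "0":
            # Started from an LSF job array: use the jobid[index] form that
            # bjobs reports for array members (e.g. 72344[3]).
            return f"{job_id}[{job_index}]"
        # Plain bsub without a job array: bjobs rejects id[0], so use the
        # bare job id.
        return job_id
    # Not under LSF: fall back on the parent process id.
    return str(os.getppid())
```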
Jessica Severin authored
setting (e.g. lsf job_id and array_index

Jessica Severin authored
are stored in the worker.out output file.
- Jul 13, 2004
Jessica Severin authored
workers currently running
Jessica Severin authored
are running so that any unregistered worker can be assumed to be a fatality and its jobs reset. Also added -loop option to run the beekeeper in an autonomous manner.
Jessica Severin authored
the analysis has effectively no system load and an unlimited number of workers of that analysis can run at any one time.

Jessica Severin authored
added get_num_needed_workers method which does a load analysis between the living workers and the workers needed to complete the available jobs. Returns a simple count which a beekeeper can use to allocate workers on computers. The workers are created without specific analyses but get assigned one by the Queen when they are created.
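A minimal sketch of such a load analysis, assuming a ceiling-division demand estimate capped by the analysis's hive_capacity; the formula and all parameter names below are assumptions for illustration, not the Queen's actual code:

```python
def num_needed_workers(unclaimed_jobs: int,
                       jobs_per_worker: int,
                       hive_capacity: int,
                       living_workers: int) -> int:
    """Count of extra workers a beekeeper could usefully allocate: enough to
    cover the unclaimed jobs, capped by the capacity left after the workers
    that are already alive."""
    # Ceiling division: how many workers the remaining jobs would keep busy.
    wanted = -(-unclaimed_jobs // jobs_per_worker)
    # Room left under the analysis's concurrency cap.
    room = max(hive_capacity - living_workers, 0)
    return max(min(wanted, room), 0)
```

Under these assumptions, 1000 unclaimed jobs at 100 jobs per worker ask for 10 workers, but an analysis already at its hive_capacity gets none.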
Jessica Severin authored
system load (higher hive_capacity) are picked first
- Jul 09, 2004
Jessica Severin authored
Also added functionality so that runWorker can be run without specifying an analysis. The create_new_worker method will now query the AnalysisStats adaptor for a 'needed worker' analysis when the analysis_id is undef. This simplifies the API interface between the Queen and the beekeepers: the beekeeper only needs to receive a count of workers. Workers can still be run with explicit analyses for testing, or in situations where one wants to manually control the processing. Now one can simply do
bsub -JW[1-100] runWorker -url mysql://ensadmin:<pass>@ecs2:3361/compara_hive_jess_23
to create 100 workers, each of which will become whatever analysis needs to be done.
Jessica Severin authored

Jessica Severin authored
decrement_needed_workers method. These are used by the Queen to pick an analysis for a newly created worker when one wasn't specified.

Jessica Severin authored

Jessica Severin authored