This project is mirrored from https://github.com/Ensembl/ensembl-hive.git.
- Oct 19, 2010: Leo Gordon
- Sep 30, 2010: Leo Gordon
- Sep 26, 2010: Leo Gordon
- Sep 22, 2010: Leo Gordon
- Sep 20, 2010: Leo Gordon (4 commits)
- Sep 13, 2010: Leo Gordon
  restructured parameters, so they are now a part of AnalysisJob and can be manipulated outside of a living worker/process
- Sep 04, 2010: Leo Gordon
- Sep 03, 2010: Leo Gordon
- Sep 02, 2010: Leo Gordon (3 commits)
- Sep 01, 2010: Leo Gordon
- Jun 13, 2010: Leo Gordon
- Mar 26, 2010: Leo Gordon
- Jan 11, 2010: Leo Gordon
- Dec 11, 2009: Leo Gordon
- Dec 03, 2009: Leo Gordon
- Oct 16, 2009: Kathryn Beal
- Sep 23, 2009: Leo Gordon
- Jul 13, 2009: Leo Gordon
- Apr 03, 2009: Albert Vilella
- Feb 15, 2009: Will Spooner
- May 28, 2008: Javier Herrero
- Nov 16, 2007: Javier Herrero
- Oct 12, 2006: Albert Vilella
  the deletion of the method is now done in the right place; still, be careful about using this method - Albert Vilella and Michael Schuster
- Sep 04, 2006: Albert Vilella
  added a remove_analysis_id method that DELETEs FROM the analysis, analysis_stats and analysis_job tables WHERE analysis_id equals the given number
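The cascading delete described in that commit can be sketched as three DELETE statements keyed on the same analysis_id, dependent tables first. This is an illustrative sketch in Python with sqlite3, not the project's actual Perl adaptor code; the table and column names come from the commit message, while the schema details and sample rows are assumed.

```python
import sqlite3

# Minimal stand-in schema for the three hive tables named in the commit.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE analysis       (analysis_id INTEGER, logic_name TEXT);
    CREATE TABLE analysis_stats (analysis_id INTEGER, status TEXT);
    CREATE TABLE analysis_job   (analysis_id INTEGER, job_id INTEGER);
    INSERT INTO analysis        VALUES (1, 'blast'), (2, 'align');
    INSERT INTO analysis_stats  VALUES (1, 'READY'), (2, 'DONE');
    INSERT INTO analysis_job    VALUES (1, 10), (1, 11), (2, 20);
""")

def remove_analysis_id(conn, analysis_id):
    """Delete one analysis and its dependent rows, dependents first."""
    for table in ("analysis_job", "analysis_stats", "analysis"):
        conn.execute(f"DELETE FROM {table} WHERE analysis_id = ?", (analysis_id,))
    conn.commit()

remove_analysis_id(conn, 1)
remaining = conn.execute("SELECT analysis_id FROM analysis").fetchall()
print(remaining)  # only analysis 2 is left: [(2,)]
```

Deleting from the dependent tables before the analysis row itself keeps the database consistent if the operation is interrupted partway through.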
- Jun 12, 2006: Javier Herrero
- Oct 01, 2005: Jessica Severin
  there is no longer any possibility that a worker might accidentally claim the job it just failed on, so there is no longer a need to check the hive_id of the job when claiming; removed the check for hive_id
- Aug 16, 2005: Jessica Severin
  control structure where a process/program has been made aware of the job(s) it is responsible for controlling. This is facilitated via a job URL: mysql://ia64e:3306/jessica_compara32b_tree/analysis_job?dbID=6065355. AnalysisJobAdaptor::CreateNewJob now returns this URL on job creation. When a job is dataflowed, an array of these URLs is returned (one for each rule). Jobs can now be dataflowed from a Process subclass with blocking enabled, and a job can be fetched directly with one of these URLs. A command-line utility, ehive_unblock.pl, has been added to unblock a URL job. To unblock a job, do: Bio::EnsEMBL::Hive::URLFactory->fetch($url)->update_status('READY'); This is primarily useful in asynchronous split process/parsing situations.
- Aug 11, 2005: Jessica Severin
  I needed to add a check to prevent the worker from grabbing the same job back and trying to run it again. The retry works best when the job is run on a different machine at a different moment in time (i.e. a different hive_id), since this randomizes the run environment.
- Aug 09, 2005: Jessica Severin
  but on a specific job. For the new system which catches job exceptions and fails that job, but allows the worker to continue working.
- Jun 13, 2005: Jessica Severin
  have not been run before (< retry_count)
- Mar 04, 2005: Jessica Severin
  added columns hive_id and retry. This allows the user to join to failed workers in the hive table, and to see which retry level the job was at when the STDOUT/STDERR files were generated. These are set at the beginning of a job run, and those for 'empty' files are deleted at job end.
- Mar 02, 2005: Jessica Severin
- Feb 23, 2005: Jessica Severin
  when debugging an analysis which fails and would increment the retry_count.
- Feb 21, 2005: Jessica Severin
  needed to better manage the hive system's load on the database housing all the hive-related tables (in case the database is overloaded by multiple users). Added the analysis_stats.sync_lock column (and correspondingly in the Object and Adaptor). Added the Queen::safe_synchronize_AnalysisStats method, which wraps the synchronize_AnalysisStats method and does various checks and locks to ensure that only one worker is trying to do a 'synchronize' on a given analysis at any given moment. Cleaned up the API between Queen and Worker so that the worker only talks directly to the Queen, rather than getting the underlying database adaptor. Added analysis_job columns runtime_msec and query_count to provide more data on how the jobs hammer a database (queries/sec).
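A sync_lock column like the one this commit describes is typically used as an atomic test-and-set: an UPDATE that only succeeds when the lock is currently free, so exactly one worker wins. This is a minimal sketch in Python with sqlite3 rather than the project's Perl code; the analysis_stats table and sync_lock column come from the commit message, everything else is assumed for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE analysis_stats (analysis_id INTEGER, sync_lock INTEGER)")
conn.execute("INSERT INTO analysis_stats VALUES (1, 0)")  # 0 = unlocked
conn.commit()

def try_lock(conn, analysis_id):
    """Atomically claim the sync lock; succeeds only if nobody holds it."""
    cur = conn.execute(
        "UPDATE analysis_stats SET sync_lock = 1 "
        "WHERE analysis_id = ? AND sync_lock = 0",
        (analysis_id,),
    )
    conn.commit()
    return cur.rowcount == 1  # one row changed means we won the lock

def unlock(conn, analysis_id):
    conn.execute("UPDATE analysis_stats SET sync_lock = 0 WHERE analysis_id = ?",
                 (analysis_id,))
    conn.commit()

first = try_lock(conn, 1)   # lock acquired
second = try_lock(conn, 1)  # refused: the lock is already held
unlock(conn, 1)
print(first, second)  # True False
```

Because the check (`sync_lock = 0`) and the set (`sync_lock = 1`) happen in one UPDATE, the database serializes concurrent attempts, which is what guarantees only one worker synchronizes a given analysis at a time.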
- Feb 10, 2005: Jessica Severin
  is asked to 're-run' a specific job. By reclaiming, this job is properly processed, so when it finishes it looks like it was run normally by the system.