  1. 21 Feb, 2005 1 commit
    • YAHRF (Yet Another Hive ReFactor).....chapter 1 · 7675c31c
      Jessica Severin authored
      needed to better manage the hive system's load on the database housing all
      the hive-related tables (in case the database is overloaded by multiple users).
      Added analysis_stats.sync_lock column (and correspondingly in Object and Adaptor)
      Added Queen::safe_synchronize_AnalysisStats method which wraps over the
        synchronize_AnalysisStats method and does various checks and locks to ensure
        that only one worker is trying to do a 'synchronize' on a given analysis at
        any given moment.
      Cleaned up the API between Queen and Worker so that the worker only talks directly to the
        Queen, rather than getting the underlying database adaptor.
      Added analysis_job columns runtime_msec, query_count to provide more data on
        how the jobs hammer a database (queries/sec).
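      A rough sketch of how the new sync_lock column could serialise stats synchronisation;
      the column and method names come from the commit message, but the exact queries, the
      0/1 lock convention and the analysis_id value are assumptions for illustration:

        -- Hypothetical lock claim: only the worker whose UPDATE reports an affected row
        -- goes on to call synchronize_AnalysisStats for that analysis.
        UPDATE analysis_stats SET sync_lock = 1
         WHERE analysis_id = 42 AND sync_lock = 0;

        -- ... perform the synchronisation, then release the lock:
        UPDATE analysis_stats SET sync_lock = 0
         WHERE analysis_id = 42;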
  2. 16 Feb, 2005 1 commit
  3. 02 Feb, 2005 1 commit
    • added column · 5eb83359
      Jessica Severin authored
        analysis_stats.avg_msec_per_job int
      modified
        analysis_stats.last_update to a datetime so that it is only changed when the API tells it to.
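      The schema change described above might look roughly like the following DDL; the
      table and column names are from the commit, but the exact types and defaults are
      assumptions:

        -- Sketch only (assumed MySQL syntax).
        ALTER TABLE analysis_stats
          ADD COLUMN avg_msec_per_job INT NOT NULL DEFAULT 0,
          MODIFY COLUMN last_update DATETIME;  -- no longer auto-updating; set explicitly by the API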
  4. 19 Nov, 2004 2 commits
  5. 09 Nov, 2004 1 commit
  6. 20 Oct, 2004 1 commit
    • switched back to analysis_job.input_id · 77675743
      Jessica Severin authored
      changed to varchar(255) (but dropped joining to the analysis_data table).
      If modules need more than 255 characters of input_id,
      they can pass the analysis_data_id via the varchar(255); example: {adid=>365902}
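      A sketch of the convention described above; the {adid=>...} format and the example id
      come from the commit message, while the DDL and the lookup query are assumptions:

        -- input_id goes back to an inline varchar(255) on analysis_job (assumed DDL).
        ALTER TABLE analysis_job MODIFY COLUMN input_id VARCHAR(255) NOT NULL;

        -- Overflow case: parameters longer than 255 characters live in analysis_data,
        -- and input_id holds a reference such as '{adid=>365902}', resolved roughly as:
        SELECT data FROM analysis_data WHERE analysis_data_id = 365902;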
  7. 05 Oct, 2004 1 commit
  8. 30 Sep, 2004 1 commit
    • modified analysis_job table : replaced input_id varchar(100) with · 2be90ea9
      Jessica Severin authored
      input_analysis_data_id int(10) which joins to analysis_data table.
      added output_analysis_data_id int(10) for storing output_id
      External analysis_data.data is LONGTEXT, which will allow much longer
      parameter sets to be passed around than was previously possible.
      AnalysisData will also allow processes to manually store 'other' data and
      pass it around via ID reference now.
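      The change above might correspond to DDL and a join along these lines; the column
      names come from the commit, while the exact types, constraints and query are
      assumptions:

        -- Sketch only: replace the inline input_id with references into analysis_data.
        ALTER TABLE analysis_job
          DROP COLUMN input_id,
          ADD COLUMN input_analysis_data_id INT(10) NOT NULL,
          ADD COLUMN output_analysis_data_id INT(10) DEFAULT NULL;

        -- Fetching a job's parameters then means joining to analysis_data (hypothetical query):
        SELECT aj.analysis_job_id, ad.data AS input
          FROM analysis_job aj
          JOIN analysis_data ad ON ad.analysis_data_id = aj.input_analysis_data_id;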
  9. 22 Sep, 2004 1 commit
    • added analysis_data table and adaptor. · 0c2c5dc5
      Jessica Severin authored
      Essentially a mini filesystem, so that data that would normally be stored in
      NFS files and referenced via a path can now be stored in the database and
      referenced via a dbID. Data is a LONGTEXT.
      Can be used to store configuration data, parameter strings,
      BLOSUM matrix data, uuencoded binary data .....
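      A plausible shape for the new table; the table name and the LONGTEXT data column are
      from the commit, while the key and surrounding column names are assumed:

        -- Sketch of the analysis_data table described above (assumed MySQL DDL).
        CREATE TABLE analysis_data (
          analysis_data_id INT(10) NOT NULL AUTO_INCREMENT,  -- the dbID used to reference the data
          data             LONGTEXT,                          -- parameters, config, uuencoded binary, ...
          PRIMARY KEY (analysis_data_id)
        );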
  10. 14 Aug, 2004 1 commit
  11. 16 Jul, 2004 2 commits
  12. 14 Jul, 2004 1 commit
  13. 08 Jul, 2004 1 commit
    • added hive_id index to analysis_job table to help with dead_worker · 27403dda
      Jessica Severin authored
      job resetting.  This allowed direct UPDATE..WHERE.. SQL to be used.
      Also changed the retry_count system: retry_count is only incremented
      for jobs that failed (status in ('GET_INPUT','RUN','WRITE_OUTPUT')).
      Jobs that were CLAIMED by the dead worker are just reset without
      incrementing the retry_count, since they were never attempted.
      Also the fetching of claimed jobs now has an 'ORDER BY retry_count'
      so that jobs that have failed are at the bottom of the list of jobs
      to process.  This allows the 'bad' jobs to filter themselves out.
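      A sketch of the kind of statements this commit enables; the hive_id column, the status
      values and the ORDER BY come from the commit message, while the reset status ('READY'),
      the example hive_id and the exact SQL are assumptions:

        -- Assumed index DDL so per-worker resets can use a direct UPDATE..WHERE:
        ALTER TABLE analysis_job ADD INDEX (hive_id);

        -- Jobs the dead worker actually ran get their retry_count bumped ...
        UPDATE analysis_job
           SET status = 'READY', retry_count = retry_count + 1
         WHERE hive_id = 12 AND status IN ('GET_INPUT','RUN','WRITE_OUTPUT');

        -- ... while merely CLAIMED jobs are reset without incrementing retry_count:
        UPDATE analysis_job
           SET status = 'READY'
         WHERE hive_id = 12 AND status = 'CLAIMED';

        -- Claiming new work prefers jobs that have failed least often:
        SELECT analysis_job_id FROM analysis_job
         WHERE status = 'READY'
         ORDER BY retry_count
         LIMIT 10;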
  14. 06 Jul, 2004 1 commit
  15. 17 Jun, 2004 1 commit
  16. 14 Jun, 2004 1 commit
  17. 08 Jun, 2004 1 commit
  18. 27 May, 2004 1 commit
  19. 25 May, 2004 1 commit