  1. 01 Sep, 2010 2 commits
  2. 31 Aug, 2010 1 commit
  3. 24 Aug, 2010 1 commit
  4. 11 Aug, 2010 1 commit
  5. 12 Jul, 2010 1 commit
  6. 27 Apr, 2010 1 commit
  7. 30 Mar, 2010 1 commit
  8. 26 Mar, 2010 1 commit
  9. 04 Mar, 2010 1 commit
  10. 25 Feb, 2010 1 commit
  11. 11 Jan, 2010 1 commit
  12. 11 Dec, 2009 1 commit
  13. 03 Dec, 2009 1 commit
  14. 02 Nov, 2009 1 commit
  15. 13 Oct, 2009 1 commit
  16. 03 Aug, 2009 1 commit
  17. 13 Jul, 2009 1 commit
  18. 03 Apr, 2009 1 commit
  19. 28 May, 2008 1 commit
  20. 16 Nov, 2007 1 commit
  21. 01 Feb, 2007 1 commit
  22. 29 Sep, 2006 1 commit
  23. 03 Feb, 2006 1 commit
  24. 14 Oct, 2005 1 commit
  25. 01 Oct, 2005 1 commit
    • Improved system for running specific jobs. System now properly runs · f971d01f
      Jessica Severin authored
      jobs that were created outside the database.  Also added the ability to skip
      writing (does not execute write_output or dataflow jobs).  Improved the logic
      so that if a job is specified but something is wrong, the worker fails
      rather than reverting to autonomous behaviour mode.
      Fully tested by running a complete homology production test.
  26. 14 Sep, 2005 1 commit
  27. 16 Aug, 2005 1 commit
    • added system for job-level blocking/unblocking. This is a very fine grain · faead1e0
      Jessica Severin authored
      control structure where a process/program is made aware of the job(s)
      it is responsible for controlling.  This is facilitated via a job url:
      AnalysisJobAdaptor::CreateNewJob now returns this url on job creation.
      When a job is dataflowed, an array of these urls is returned (one for each rule).
      Jobs can now be dataflowed from a Process subclass with blocking enabled.
      A job can be fetched directly with one of these URLs, and a commandline
      utility has been added to unblock a job given its url.
      This is primarily useful in asynchronous split process/parsing situations.
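The job-url mechanism described above can be sketched in miniature. The real eHive implementation is Perl over a MySQL database; this Python sketch is only illustrative, and every class and method name here (`Job`, `JobStore`, `create_new_job`, `fetch_by_url`, `unblock`) is a hypothetical stand-in for the adaptor and commandline utility the commit message mentions.

```python
class Job:
    """A job that is addressable by a url of the kind the commit describes."""
    def __init__(self, job_id, dbname):
        self.job_id = job_id
        self.status = "BLOCKED"  # jobs may be created in a blocked state
        self.url = f"mysql://host/{dbname}?analysis_job_id={job_id}"

class JobStore:
    """Hypothetical stand-in for the database plus its job adaptor."""
    def __init__(self, dbname):
        self.dbname = dbname
        self.jobs = {}
        self.next_id = 1

    def create_new_job(self):
        # Mirrors the idea that CreateNewJob returns the url on job creation.
        job = Job(self.next_id, self.dbname)
        self.jobs[job.job_id] = job
        self.next_id += 1
        return job.url

    def fetch_by_url(self, url):
        # A job can be fetched directly with one of these urls.
        job_id = int(url.rsplit("analysis_job_id=", 1)[1])
        return self.jobs[job_id]

    def unblock(self, url):
        # What a commandline unblock utility would do, given a job url.
        self.fetch_by_url(url).status = "READY"
```

The point of the design is that the url is a self-contained handle: a separate process that only knows the url can fetch and unblock the job, which is what makes the asynchronous split/parsing use case work.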
  28. 13 Jun, 2005 1 commit
    • - moved worker/process code related to the persistent /tmp/worker_## directory · c8c45241
      Jessica Severin authored
        into the Worker object (and out of the Process)
      - added Process::worker method so that running processes can talk to the
        worker that is currently running itself.
      - modified system so that if a process subclass calls Process::dataflow_output_id
        on branch_code 1, it will turn off the automatic flowing of the input_job
        out on branch_code 1.  This makes coding much cleaner, since processes
        no longer need to modify the input_id of the input_job
      - added method Process::autoflow_inputjob which toggles this autoflow behaviour
        if a subclass would like to modify this directly
      - auto_dataflow now happens right after the Process::write_output stage
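The autoflow rule in this commit can be sketched as follows. The real system is Perl; this Python sketch is illustrative only, and the `flowed` list and `run_job`/`SplittingProcess` names are hypothetical scaffolding around the behaviour the message describes: the input job normally flows out on branch_code 1 right after write_output, unless the process has itself dataflowed on branch 1.

```python
class Process:
    """Hypothetical stand-in for the eHive Process base class."""
    def __init__(self):
        self.autoflow_inputjob = True  # the toggle this commit adds
        self.flowed = []               # records (output_id, branch_code) events

    def dataflow_output_id(self, output_id, branch_code=1):
        self.flowed.append((output_id, branch_code))
        if branch_code == 1:
            # An explicit dataflow on branch 1 suppresses the automatic flow.
            self.autoflow_inputjob = False

    def write_output(self):
        pass  # subclasses override this

    def run_job(self, input_id):
        self.write_output()
        # auto_dataflow happens right after the write_output stage.
        if self.autoflow_inputjob:
            self.flowed.append((input_id, 1))

class SplittingProcess(Process):
    """A subclass that emits its own jobs on branch 1."""
    def write_output(self):
        self.dataflow_output_id("child_a")
        self.dataflow_output_id("child_b")
```

With a plain `Process`, the input job autoflows on branch 1; with `SplittingProcess`, the explicit branch-1 dataflows replace it, so the subclass never has to rewrite the input job's input_id.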
  29. 08 Mar, 2005 1 commit
  30. 03 Mar, 2005 3 commits
  31. 02 Mar, 2005 1 commit
  32. 23 Feb, 2005 3 commits
  33. 21 Feb, 2005 1 commit
    • YAHRF (Yet Another Hive ReFactor).....chapter 1 · 7675c31c
      Jessica Severin authored
      needed to better manage the hive system's load on the database housing all
      the hive related tables (in case the database is overloaded by multiple users).
      Added analysis_stats.sync_lock column (and correspondingly in the Object and Adaptor)
      Added Queen::safe_synchronize_AnalysisStats method which wraps over the
        synchronize_AnalysisStats method and does various checks and locks to ensure
        that only one worker is trying to do a 'synchronize' on a given analysis at
        any given moment.
      Cleaned up API between Queen/Worker so that worker only talks directly to the
        Queen, rather than getting the underlying database adaptor.
      Added analysis_job columns runtime_msec, query_count to provide more data on
        how the jobs hammer a database (queries/sec).
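The sync_lock idea above is a classic advisory-lock pattern. The real implementation is Perl with the lock taken via SQL; in this illustrative Python sketch a `threading.Lock` stands in for the atomic conditional UPDATE, and `sync_count`/`try_acquire_sync_lock` are hypothetical names for the bookkeeping.

```python
import threading

class AnalysisStats:
    """Models one analysis_stats row; sync_lock acts as an advisory lock."""
    def __init__(self):
        self.sync_lock = 0
        self.sync_count = 0
        self._mutex = threading.Lock()  # stands in for an atomic SQL UPDATE

    def try_acquire_sync_lock(self):
        # Emulates: UPDATE analysis_stats SET sync_lock=1 WHERE ... AND sync_lock=0
        with self._mutex:
            if self.sync_lock == 0:
                self.sync_lock = 1
                return True
            return False

def safe_synchronize_AnalysisStats(stats):
    """Only one caller at a time gets to run the expensive synchronization."""
    if not stats.try_acquire_sync_lock():
        return False             # another worker is already synchronizing
    try:
        stats.sync_count += 1    # placeholder for the real synchronize work
        return True
    finally:
        stats.sync_lock = 0      # release so future syncs can proceed
```

The check-then-set must be atomic (hence the conditional UPDATE in the real system); otherwise two workers could both see `sync_lock == 0` and both start synchronizing, which is exactly the database load this refactor was meant to avoid.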
  34. 17 Feb, 2005 1 commit
    • added method AnalysisStatsAdaptor::increment_needed_workers · af273c18
      Jessica Severin authored
      This is called when a worker dies, to replace itself in the needed_workers count:
      the count is decremented when a worker is born, and the worker is counted as
      living (and subtracted) as long as it is running.  This guarantees that another
      worker will quickly be created after this one dies, without waiting for a synch to happen.
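The counter bookkeeping above can be reduced to a toy model. This is an illustrative Python sketch, not the Perl adaptor method itself, and `claim_worker_slot` is a hypothetical name for the birth-time decrement the message describes.

```python
class AnalysisStats:
    """Toy model of the needed_workers counter for one analysis."""
    def __init__(self, needed_workers):
        self.needed_workers = needed_workers

    def claim_worker_slot(self):
        # Decremented when a worker is born against this analysis.
        self.needed_workers -= 1

    def increment_needed_workers(self):
        # Called from the worker's death path, so a replacement can be
        # spawned immediately without waiting for a full synchronization.
        self.needed_workers += 1
```

Pairing every birth-time decrement with a death-time increment keeps the counter accurate between syncs, which is what lets a replacement worker be created right away.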
  35. 16 Feb, 2005 1 commit
    • improved dynamic synch logic. Only case where the 5 minute delay is needed · da413295
      Jessica Severin authored
      is when there are lots of workers 'WORKING' so as to avoid them falling over
      each other.  The 'WORKING' state only exists in the middle of a large run.
      When the last worker dies, the state is 'ALL_CLAIMED' so the sync on death
      will happen properly.  As the last pile of workers die they will all do
      a synch, but that's OK since the system needs to be properly synched when
      the last one dies since there won't be anybody left to do it.
      Also added a 10 minute check for the case where the state is already 'SYNCHING',
      to handle a worker dying in the middle of 'SYNCHING'.
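The decision logic this commit describes can be summarized in one function. The state names and the 5- and 10-minute intervals come from the commit message; the function itself and its signature are an illustrative Python sketch, not the actual Perl code.

```python
FIVE_MIN = 5 * 60
TEN_MIN = 10 * 60

def should_attempt_sync(state, seconds_since_last_sync):
    """Decide whether a worker should (re)synchronize the analysis stats."""
    if state == "WORKING":
        # Mid-run, with many live workers: back off for 5 minutes so
        # they do not fall over each other re-synchronizing.
        return seconds_since_last_sync > FIVE_MIN
    if state == "SYNCHING":
        # Another worker claimed the sync; after 10 minutes with no
        # progress, assume it died mid-sync and take over.
        return seconds_since_last_sync > TEN_MIN
    # e.g. 'ALL_CLAIMED' as the last workers die: sync immediately,
    # since nobody will be left to do it afterwards.
    return True
```

This matches the reasoning above: the delay only matters while many workers are 'WORKING'; at the end of a run the state is 'ALL_CLAIMED', so the sync-on-death always runs.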