  1. 14 Sep, 2005 1 commit
  2. 16 Aug, 2005 1 commit
    • added system for job-level blocking/unblocking. This is a very fine-grained · faead1e0
      Jessica Severin authored
      control structure in which a process/program has been made aware of the job(s)
      it is responsible for controlling.  This is facilitated via a job URL:
      AnalysisJobAdaptor::CreateNewJob now returns this URL on job creation.
      When a job is dataflowed, an array of these URLs is returned (one for each rule).
      Jobs can now be dataflowed from a Process subclass with blocking enabled.
      A job can be fetched directly with one of these URLs.
      A command-line utility has been added to unblock a job given its URL.
      This is primarily useful in asynchronous split process/parsing situations.
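The job-URL mechanism described above can be sketched roughly as follows. This is a hypothetical Python sketch, not the actual eHive Perl API: the `JobRegistry` and `Job` classes and the `job://` URL scheme are invented for illustration; only the behavior (creation returns a URL, jobs can be fetched and unblocked by URL) comes from the commit message.

```python
# Hypothetical sketch of job-level blocking keyed by a job URL.
# All names here are illustrative, not the real eHive API.

class Job:
    def __init__(self, job_id, analysis):
        self.job_id = job_id
        self.analysis = analysis
        self.blocked = False

    @property
    def url(self):
        # an invented URL scheme standing in for the real job url
        return f"job://{self.analysis}/{self.job_id}"

class JobRegistry:
    def __init__(self):
        self._by_url = {}

    def create_new_job(self, job_id, analysis, blocked=False):
        # mirrors "CreateNewJob now returns this url on job creation"
        job = Job(job_id, analysis)
        job.blocked = blocked
        self._by_url[job.url] = job
        return job.url

    def fetch_by_url(self, url):
        # "A job can be fetched directly with one of these URLs"
        return self._by_url[url]

    def unblock(self, url):
        # what a command-line "unblock this URL" utility would call
        self._by_url[url].blocked = False

registry = JobRegistry()
url = registry.create_new_job(42, "split_parse", blocked=True)
registry.unblock(url)
```

A blocked job created during an asynchronous split can thus be released later by any process that holds its URL, without knowing anything else about it.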
  3. 13 Jun, 2005 1 commit
    • - moved worker/process code related to the persistent /tmp/worker_## directory · c8c45241
      Jessica Severin authored
        into the Worker object (and out of the Process)
      - added Process::worker method so that a running process can talk to the
        worker that is currently running it
      - modified the system so that if a process subclass uses Process::dataflow_output_id
        on branch_code 1, it will turn off the automatic flowing of the input_job
        out on branch_code 1.  This will make coding much cleaner, since processes
        no longer need to modify the input_id of the input_job
      - added method Process::autoflow_inputjob which toggles this autoflow behaviour
        if a subclass would like to modify it directly
      - auto_dataflow now happens right after the Process::write_output stage
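The branch-1 autoflow rule above can be sketched as a small state machine. This is a Python sketch under assumed names (`run_job`, `flowed`, the `Splitter` subclass are invented); the real eHive code is Perl, and only the rule itself (an explicit branch-1 dataflow suppresses the automatic one, which fires right after `write_output`) is taken from the commit message.

```python
# Sketch of the autoflow rule: an explicit dataflow on branch_code 1
# switches off the automatic flow of the input job on that branch.
# Class and method names are illustrative, not the real eHive API.

class Process:
    def __init__(self):
        self.autoflow_inputjob = True   # the toggle added by this commit
        self.flowed = []                # recorded (branch_code, id) events

    def dataflow_output_id(self, output_id, branch_code=1):
        if branch_code == 1:
            self.autoflow_inputjob = False
        self.flowed.append((branch_code, output_id))

    def write_output(self, input_id):
        pass  # subclasses override

    def run_job(self, input_id):
        self.write_output(input_id)
        # auto-dataflow happens right after the write_output stage
        if self.autoflow_inputjob:
            self.flowed.append((1, input_id))

class Splitter(Process):
    def write_output(self, input_id):
        # explicit branch-1 dataflow suppresses the automatic one,
        # so the unmodified input job is NOT flowed out again
        self.dataflow_output_id(f"{input_id}-part1", branch_code=1)

p = Splitter()
p.run_job("job7")
```

The payoff is that a subclass no longer has to rewrite the input job's `input_id` just to stop it from flowing onward unchanged.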
  4. 08 Mar, 2005 1 commit
  5. 03 Mar, 2005 3 commits
  6. 02 Mar, 2005 1 commit
  7. 23 Feb, 2005 3 commits
  8. 21 Feb, 2005 1 commit
    • YAHRF (Yet Another Hive ReFactor).....chapter 1 · 7675c31c
      Jessica Severin authored
      Needed to better manage the hive system's load on the database housing all
      the hive-related tables (in case the database is overloaded by multiple users).
      Added the analysis_stats.sync_lock column (and correspondingly in the Object and Adaptor).
      Added Queen::safe_synchronize_AnalysisStats method which wraps over the
        synchronize_AnalysisStats method and does various checks and locks to ensure
        that only one worker is trying to do a 'synchronize' on a given analysis at
        any given moment.
      Cleaned up API between Queen/Worker so that worker only talks directly to the
        Queen, rather than getting the underlying database adaptor.
      Added analysis_job columns runtime_msec, query_count to provide more data on
        how the jobs hammer a database (queries/sec).
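The `sync_lock` idea can be illustrated with an atomic conditional UPDATE: a worker claims the lock only if nobody else holds it, so at most one worker runs the expensive synchronize step per analysis. This is a sketch using SQLite and an invented schema/function name, not the real eHive implementation (which is Perl against MySQL), but the compare-and-set pattern is the same.

```python
# Sketch of sync_lock: claim the right to synchronize an analysis
# with a conditional UPDATE; a rowcount of 0 means someone else
# already holds the lock. Schema and names are illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE analysis_stats "
           "(analysis_id INTEGER PRIMARY KEY, sync_lock INTEGER DEFAULT 0)")
db.execute("INSERT INTO analysis_stats (analysis_id, sync_lock) VALUES (1, 0)")

def safe_synchronize(analysis_id):
    # atomically claim the lock only if it is free
    cur = db.execute(
        "UPDATE analysis_stats SET sync_lock=1 "
        "WHERE analysis_id=? AND sync_lock=0", (analysis_id,))
    if cur.rowcount == 0:
        return False              # another worker is already syncing
    try:
        pass                      # ... the expensive synchronize step ...
        return True
    finally:
        db.execute("UPDATE analysis_stats SET sync_lock=0 "
                   "WHERE analysis_id=?", (analysis_id,))

first = safe_synchronize(1)       # lock free -> sync runs

db.execute("UPDATE analysis_stats SET sync_lock=1 WHERE analysis_id=1")
second = safe_synchronize(1)      # lock held elsewhere -> refused
```

Because the claim and the check happen in a single UPDATE, two workers racing on the same analysis cannot both win.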
  9. 17 Feb, 2005 1 commit
    • added method AnalysisStatsAdaptor::increment_needed_workers · af273c18
      Jessica Severin authored
      Called when a worker dies, to replace itself in the needed_workers count: the
      count is decremented when a worker is born, and the worker is counted as living
      (and subtracted) as long as it is running.  This guarantees that another worker
      will quickly be created after this one dies (without needing to wait for a
      synch to happen).
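The accounting is a simple born/died counter pair. A toy Python sketch, with invented names (`worker_born`, `worker_died`, `num_required_workers`); only the invariant (death restores what birth subtracted, so a replacement is warranted immediately, without a resync) comes from the commit message.

```python
# Toy accounting for needed workers: decremented at birth,
# incremented back at death. Names are illustrative.

class AnalysisStats:
    def __init__(self, num_required_workers):
        self.num_required_workers = num_required_workers

    def worker_born(self):
        # the worker is counted as living, so it no longer "needs" itself
        self.num_required_workers -= 1

    def worker_died(self):
        # AnalysisStatsAdaptor::increment_needed_workers, in spirit:
        # restore the slot so a replacement is spawned right away
        self.num_required_workers += 1

stats = AnalysisStats(num_required_workers=5)
stats.worker_born()   # 4 still needed while this one runs
stats.worker_died()   # back to 5; a replacement can start immediately
```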
  10. 16 Feb, 2005 4 commits
  11. 10 Feb, 2005 1 commit
  12. 11 Jan, 2005 1 commit
  13. 08 Jan, 2005 1 commit
  14. 14 Dec, 2004 1 commit
  15. 09 Dec, 2004 1 commit
  16. 25 Nov, 2004 2 commits
  17. 24 Nov, 2004 1 commit
  18. 20 Nov, 2004 1 commit
    • New distributed Queen system. Queen/hive updates its state in an incremental · e3d44c7e
      Jessica Severin authored
      and distributed manner as it interacts with the workers over the course of its life.
      When a script starts and asks a queen to create a worker the queen has
      a list of known analyses which are 'above the surface' where full hive analysis has
      been done and the number of needed workers has been calculated. Full synch requires
      joining data between the analysis, analysis_job, analysis_stats, and hive tables.
      By the time this reached 10e7 jobs, 10e4 analyses, and 10e3 workers, a full hard sync
      took minutes, and it was clear this part of the system wasn't scaling and wasn't going
      to make it to the next order of magnitude. This occurred in the compara blastz pipeline between
      mouse and rat.
      Now there are some analyses 'below the surface' that have partial synchronization.
      These analyses have been flagged as having 'x' new jobs (AnalysisJobAdaptor updating
      analysis_stats on job insert).  If no analysis is found to assign to the newly
      created worker, the queen will dip below the surface and start checking
      the analyses with the highest probability of needing the most workers.
      This incremental sync is also done in Queen::get_num_needed_workers.
      When calculating ahead a total worker count, this routine will also dip below
      the surface until the hive reaches its currently defined worker saturation.
      A beekeeper is no longer a required component for the system to function.
      If workers can get onto cpus the hive will run.  The beekeeper is now mainly a
      user display program showing the status of the hive.  There is no longer any
      central process doing work and one hive can potentially scale
      beyond 10e9 jobs in graphs of 10e6 analysis nodes and 10e6 running workers.
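The "surface" strategy above can be sketched as a two-tier lookup: fully synced analyses are checked first, and only if none of them needs a worker does the queen dip below and incrementally sync candidates ranked by accumulated new jobs. This is a speculative Python sketch; all names (`assign_analysis`, `incremental_sync`) and the data shapes are invented, and the real ranking logic is certainly richer than this.

```python
# Sketch of above/below-surface assignment. Purely illustrative.

def incremental_sync(name, new_jobs):
    # stand-in for the real per-analysis synchronization, which
    # would recompute how many workers the analysis needs
    return new_jobs  # pretend each new job needs one worker

def assign_analysis(above_surface, below_surface):
    """above_surface: [(name, needed_workers)] -- fully synced.
    below_surface: [(name, new_job_count)] -- partially synced."""
    for name, needed in above_surface:
        if needed > 0:
            return name
    # dip below the surface: most promising (most new jobs) first
    for name, new_jobs in sorted(below_surface, key=lambda a: -a[1]):
        if incremental_sync(name, new_jobs) > 0:
            return name
    return None

choice = assign_analysis(
    above_surface=[("blastz", 0)],            # synced, but saturated
    below_surface=[("chains", 3), ("nets", 10)])
```

The point of the design is that the expensive full join over analysis, analysis_job, analysis_stats, and hive only ever happens one analysis at a time, on demand.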
  19. 09 Nov, 2004 2 commits
    • reformatted code (removed all the tabs) · 088529b5
      Jessica Severin authored
    • refactored synchronization logic to allow for worker-distributed syncing. · e6fb56d1
      Jessica Severin authored
      The synchronization of the analysis_stats summary statistics was done by
      the beekeeper at the top of its loop.  For graphs with 40,000+ analyses
      this centralized syncing became a bottleneck.  The new system allows
      the Queen attached to each worker process to synchronize its analysis.
      Syncing happens when a worker 'checks in' and when it dies.  The sync on
      'check in' only updates if the stats are >60 secs out of date, to prevent
      over-syncing.
      The beekeeper still needs to do whole-system syncs when a subsection has
      finished and the next section needs to be 'unblocked'.  For homology this
      will happen 2 times in a 16-hour run.
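The >60-second staleness guard can be sketched as a simple throttle. Hypothetical Python, with invented names (`check_in`, `sync_count`); only the rule (a check-in resyncs only when the stats are more than 60 seconds out of date) is from the commit message.

```python
# Sketch of throttled syncing on worker check-in. Illustrative only.
import time

SYNC_INTERVAL = 60.0  # seconds of staleness tolerated before resync

class AnalysisStats:
    def __init__(self):
        self.last_sync = 0.0
        self.sync_count = 0

    def check_in(self, now=None):
        now = time.time() if now is None else now
        if now - self.last_sync > SYNC_INTERVAL:
            self.sync_count += 1      # the actual resync would run here
            self.last_sync = now

stats = AnalysisStats()
stats.check_in(now=100.0)   # stale -> syncs
stats.check_in(now=130.0)   # only 30 s later -> skipped
stats.check_in(now=161.0)   # >60 s since last sync -> syncs again
```

With many workers checking in against the same analysis, this bounds the sync rate per analysis regardless of worker count.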
  20. 20 Oct, 2004 1 commit
  21. 12 Oct, 2004 1 commit
  22. 11 Aug, 2004 2 commits
  23. 06 Aug, 2004 1 commit
  24. 03 Aug, 2004 1 commit
  25. 16 Jul, 2004 2 commits
  26. 15 Jul, 2004 1 commit
  27. 14 Jul, 2004 1 commit
  28. 13 Jul, 2004 2 commits