Commit baa146d3 authored by Leo Gordon

big renaming of hive->worker, analysis_job->job, analysis_job_file->job_file, analysis_job_id->job_id, worker.beekeeper->worker.meadow_type, etc
parent 7dbe9126
@@ -63,16 +63,16 @@
 or run it in step-by-step mode, initiating every step by separate executions of 'beekeeper.pl ... -run' command.
 We will use the step-by-step mode in order to see what is going on.
-5.4 Go to mysql window and check the contents of analysis_job table:
-MySQL> SELECT * FROM analysis_job;
+5.4 Go to mysql window and check the contents of job table:
+MySQL> SELECT * FROM job;
 It will only contain jobs that set up the multiplication tasks in 'READY' mode - meaning 'ready to be taken by workers and executed'.
 Go to the beekeeper window and run the 'beekeeper.pl ... -run' once.
 It will submit a worker to the farm that will at some point get the 'start' job(s).
-5.5 Go to mysql window again and check the contents of analysis_job table. Keep checking as the worker may spend some time in 'pending' state.
+5.5 Go to mysql window again and check the contents of job table. Keep checking as the worker may spend some time in 'pending' state.
 After the first worker is done you will see that 'start' jobs are now done and new 'part_multiply' and 'add_together' jobs have been created.
 Also check the contents of 'intermediate_result' table, it should be empty at that moment:
@@ -82,7 +82,7 @@
 Go back to the beekeeper window and run the 'beekeeper.pl ... -run' for the second time.
 It will submit another worker to the farm that will at some point get the 'part_multiply' jobs.
-5.6 Now check both 'analysis_job' and 'intermediate_result' tables again.
+5.6 Now check both 'job' and 'intermediate_result' tables again.
 At some moment 'part_multiply' jobs will have been completed and the results will go into 'intermediate_result' table;
 'add_together' jobs are still to be done.
...
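The table states the tutorial asks you to observe at checkpoints 5.4-5.6 can be simulated without eHive at all. A minimal, hypothetical Python sketch (plain dicts standing in for the job and intermediate_result tables; two part_multiply jobs are assumed, the real count depends on the pipeline's input):

```python
# Hypothetical stand-in for the eHive job table: a list of dicts.
jobs = [{'analysis': 'start', 'status': 'READY'}]   # step 5.4: only 'start' is READY
intermediate_result = []                            # still empty

def statuses(analysis):
    return [j['status'] for j in jobs if j['analysis'] == analysis]

# First 'beekeeper.pl ... -run': a worker runs the 'start' job, which
# creates the next layer of jobs (the step 5.5 view).
fanout = []
for j in jobs:
    if j['analysis'] == 'start' and j['status'] == 'READY':
        j['status'] = 'DONE'
        fanout += [{'analysis': 'part_multiply', 'status': 'READY'},
                   {'analysis': 'part_multiply', 'status': 'READY'},
                   {'analysis': 'add_together', 'status': 'READY'}]
jobs += fanout

# Second run: 'part_multiply' jobs complete and write rows into
# intermediate_result; 'add_together' is still to be done (the step 5.6 view).
for j in jobs:
    if j['analysis'] == 'part_multiply' and j['status'] == 'READY':
        j['status'] = 'DONE'
        intermediate_result.append('partial product row')

print(statuses('start'), statuses('add_together'), len(intermediate_result))
# -> ['DONE'] ['READY'] 2
```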
@@ -25,7 +25,7 @@
 Each worker is linked to an analysis_id, registers its self on creation
 into the Hive, creates a RunnableDB instance of the Analysis->module,
 gets relevant configuration information from the database, does its
-work, creates the next layer of analysis_job entries by interfacing to
+work, creates the next layer of job entries by interfacing to
 the DataflowRuleAdaptor to determine the analyses it needs to pass its
 output data to and creates jobs on the database of the next analysis.
 It repeats this cycle until it has lived its lifetime or until there are no
@@ -37,7 +37,7 @@
 The Queen's primary job is to create Workers to get the work down.
 As part of this, she is also responsible for summarizing the status of the
-analyses by querying the analysis_jobs, summarizing, and updating the
+analyses by querying the jobs, summarizing, and updating the
 analysis_stats table. From this she is also responsible for monitoring and
 'unblocking' analyses via the analysis_ctrl_rules.
 The Queen is also responsible for freeing up jobs that were claimed by Workers
...
@@ -59,9 +59,9 @@ use base ('Bio::EnsEMBL::DBSQL::BaseAdaptor');
 Args       : -input_id => string of input_id which will be passed to run the job (or a Perl hash that will be automagically stringified)
              -analysis => Bio::EnsEMBL::Analysis object from a database
              -block => int(0,1) set blocking state of job (default = 0)
-             -input_job_id => (optional) analysis_job_id of job that is creating this
+             -input_job_id => (optional) job_id of job that is creating this
              job. Used purely for book keeping.
-Example    : $analysis_job_id = Bio::EnsEMBL::Hive::DBSQL::AnalysisJobAdaptor->CreateNewJob(
+Example    : $job_id = Bio::EnsEMBL::Hive::DBSQL::AnalysisJobAdaptor->CreateNewJob(
              -input_id => 'my input data',
              -analysis => $myAnalysis);
 Description: uses the analysis object to get the db connection from the adaptor to store a new
@@ -69,7 +69,7 @@ use base ('Bio::EnsEMBL::DBSQL::BaseAdaptor');
 Also updates corresponding analysis_stats by incrementing total_job_count,
 unclaimed_job_count and flagging the incremental update by changing the status
 to 'LOADING' (but only if the analysis is not blocked).
-Returntype : int analysis_job_id on database analysis is from.
+Returntype : int job_id on database analysis is from.
 Exceptions : thrown if either -input_id or -analysis are not properly defined
 Caller     : general
@@ -80,7 +80,7 @@ sub CreateNewJob {
 return undef unless(scalar @args);
-my ($input_id, $analysis, $prev_analysis_job_id, $blocked, $semaphore_count, $semaphored_job_id) =
+my ($input_id, $analysis, $prev_job_id, $blocked, $semaphore_count, $semaphored_job_id) =
 rearrange([qw(INPUT_ID ANALYSIS INPUT_JOB_ID BLOCK SEMAPHORE_COUNT SEMAPHORED_JOB_ID)], @args);
 throw("must define input_id") unless($input_id);
@@ -99,15 +99,15 @@ sub CreateNewJob {
 $input_id = "_ext_input_analysis_data_id $input_data_id";
 }
-my $sql = q{INSERT ignore into analysis_job
-(input_id, prev_analysis_job_id,analysis_id,status,semaphore_count,semaphored_job_id)
+my $sql = q{INSERT ignore into job
+(input_id, prev_job_id,analysis_id,status,semaphore_count,semaphored_job_id)
 VALUES (?,?,?,?,?,?)};
 my $status = $blocked ? 'BLOCKED' : 'READY';
 my $dbc = $analysis->adaptor->db->dbc;
 my $sth = $dbc->prepare($sql);
-$sth->execute($input_id, $prev_analysis_job_id, $analysis->dbID, $status, $semaphore_count, $semaphored_job_id);
+$sth->execute($input_id, $prev_job_id, $analysis->dbID, $status, $semaphore_count, $semaphored_job_id);
 my $job_id = $sth->{'mysql_insertid'};
 $sth->finish;
@@ -131,7 +131,7 @@ sub CreateNewJob {
 Arg [1]    : int $id
              the unique database identifier for the feature to be obtained
 Example    : $feat = $adaptor->fetch_by_dbID(1234);
-Description: Returns the AnalysisJob defined by the analysis_job_id $id.
+Description: Returns the AnalysisJob defined by the job_id $id.
 Returntype : Bio::EnsEMBL::Hive::AnalysisJob
 Exceptions : thrown if $id is not defined
 Caller     : general
@@ -191,8 +191,8 @@ sub fetch_all {
 sub fetch_all_failed_jobs {
 my ($self,$analysis_id) = @_;
-my $constraint = "a.status='FAILED'";
-$constraint .= " AND a.analysis_id=$analysis_id" if($analysis_id);
+my $constraint = "j.status='FAILED'";
+$constraint .= " AND j.analysis_id=$analysis_id" if($analysis_id);
 return $self->_generic_fetch($constraint);
 }
@@ -200,7 +200,7 @@ sub fetch_all_failed_jobs {
 sub fetch_all_incomplete_jobs_by_worker_id {
 my ($self, $worker_id) = @_;
-my $constraint = "a.status IN ('COMPILATION','GET_INPUT','RUN','WRITE_OUTPUT') AND a.worker_id='$worker_id'";
+my $constraint = "j.status IN ('COMPILATION','GET_INPUT','RUN','WRITE_OUTPUT') AND j.worker_id='$worker_id'";
 return $self->_generic_fetch($constraint);
 }
@@ -281,25 +281,25 @@ sub _generic_fetch {
 sub _tables {
 my $self = shift;
-return (['analysis_job', 'a']);
+return (['job', 'j']);
 }
 sub _columns {
 my $self = shift;
-return qw (a.analysis_job_id
-           a.prev_analysis_job_id
-           a.analysis_id
-           a.input_id
-           a.worker_id
-           a.status
-           a.retry_count
-           a.completed
-           a.runtime_msec
-           a.query_count
-           a.semaphore_count
-           a.semaphored_job_id
+return qw (j.job_id
+           j.prev_job_id
+           j.analysis_id
+           j.input_id
+           j.worker_id
+           j.status
+           j.retry_count
+           j.completed
+           j.runtime_msec
+           j.query_count
+           j.semaphore_count
+           j.semaphored_job_id
 );
 }
@@ -330,7 +330,7 @@ sub _objs_from_sth {
 : $column{'input_id'};
 my $job = Bio::EnsEMBL::Hive::AnalysisJob->new(
-    -DBID => $column{'analysis_job_id'},
+    -DBID => $column{'job_id'},
     -ANALYSIS_ID => $column{'analysis_id'},
     -INPUT_ID => $input_id,
     -WORKER_ID => $column{'worker_id'},
@@ -362,7 +362,7 @@ sub decrease_semaphore_count_for_jobid { # used in semaphore annihilation or
 my $jobid = shift @_;
 my $dec = shift @_ || 1;
-my $sql = "UPDATE analysis_job SET semaphore_count=semaphore_count-? WHERE analysis_job_id=?";
+my $sql = "UPDATE job SET semaphore_count=semaphore_count-? WHERE job_id=?";
 my $sth = $self->prepare($sql);
 $sth->execute($dec, $jobid);
@@ -374,7 +374,7 @@ sub increase_semaphore_count_for_jobid { # used in semaphore propagation
 my $jobid = shift @_;
 my $inc = shift @_ || 1;
-my $sql = "UPDATE analysis_job SET semaphore_count=semaphore_count+? WHERE analysis_job_id=?";
+my $sql = "UPDATE job SET semaphore_count=semaphore_count+? WHERE job_id=?";
 my $sth = $self->prepare($sql);
 $sth->execute($inc, $jobid);
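The two helpers above adjust job.semaphore_count, which gates claiming: a job only becomes eligible for a worker once its count has drained to zero. A sketch of that idea in Python/SQLite, with a deliberately minimal, hypothetical schema (the real job table has many more columns):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute("CREATE TABLE job (job_id INTEGER PRIMARY KEY, status TEXT, semaphore_count INTEGER)")
db.execute("INSERT INTO job VALUES (1, 'READY', 2)")   # funnel job waiting on two fan jobs

def decrease_semaphore_count_for_jobid(job_id, dec=1):
    # mirrors the adaptor's UPDATE: called once per completed prerequisite
    db.execute("UPDATE job SET semaphore_count = semaphore_count - ? WHERE job_id = ?",
               (dec, job_id))

def claimable(job_id):
    # a worker may only claim the job once the semaphore has drained
    (count,) = db.execute("SELECT semaphore_count FROM job WHERE job_id = ?",
                          (job_id,)).fetchone()
    return count <= 0

decrease_semaphore_count_for_jobid(1)
print(claimable(1))   # False: one prerequisite still outstanding
decrease_semaphore_count_for_jobid(1)
print(claimable(1))   # True: the funnel job is now eligible
```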
@@ -386,7 +386,7 @@ sub increase_semaphore_count_for_jobid { # used in semaphore propagation
 Arg [1]    : $analysis_id
 Example    :
-Description: updates the analysis_job.status in the database
+Description: updates the job.status in the database
 Returntype :
 Exceptions :
 Caller     : general
@@ -396,7 +396,7 @@ sub increase_semaphore_count_for_jobid { # used in semaphore propagation
 sub update_status {
 my ($self, $job) = @_;
-my $sql = "UPDATE analysis_job SET status='".$job->status."' ";
+my $sql = "UPDATE job SET status='".$job->status."' ";
 if($job->status eq 'DONE') {
 $sql .= ",completed=now()";
@@ -407,7 +407,7 @@ sub update_status {
 } elsif($job->status eq 'READY') {
 }
-$sql .= " WHERE analysis_job_id='".$job->dbID."' ";
+$sql .= " WHERE job_id='".$job->dbID."' ";
 # This particular query is infamous for collisions and 'deadlock' situations; let's make them wait and retry.
 foreach (0..3) {
@@ -445,12 +445,12 @@ sub store_out_files {
 return unless($job);
-my $sql = sprintf("DELETE from analysis_job_file WHERE worker_id=%d and analysis_job_id=%d",
+my $sql = sprintf("DELETE from job_file WHERE worker_id=%d and job_id=%d",
 $job->worker_id, $job->dbID);
 $self->dbc->do($sql);
 return unless($job->stdout_file or $job->stderr_file);
-$sql = "INSERT ignore INTO analysis_job_file (analysis_job_id, worker_id, retry, type, path) VALUES ";
+$sql = "INSERT ignore INTO job_file (job_id, worker_id, retry, type, path) VALUES ";
 if($job->stdout_file) {
 $sql .= sprintf("(%d,%d,%d,'STDOUT','%s')", $job->dbID, $job->worker_id,
 $job->retry_count, $job->stdout_file);
@@ -484,10 +484,10 @@ sub grab_jobs_for_worker {
 my ($self, $worker) = @_;
 my $analysis_id = $worker->analysis->dbID();
-my $worker_id = $worker->worker_id();
+my $worker_id = $worker->dbID();
 my $sql_base = qq{
-    UPDATE analysis_job
+    UPDATE job
     SET worker_id='$worker_id', status='CLAIMED'
     WHERE analysis_id='$analysis_id' AND status='READY' AND semaphore_count<=0
 };
@@ -504,7 +504,7 @@ sub grab_jobs_for_worker {
 $claim_count = $self->dbc->do($sql_any);
 }
-my $constraint = "a.analysis_id='$analysis_id' AND a.worker_id='$worker_id' AND a.status='CLAIMED'";
+my $constraint = "j.analysis_id='$analysis_id' AND j.worker_id='$worker_id' AND j.status='CLAIMED'";
 return $self->_generic_fetch($constraint);
 }
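grab_jobs_for_worker claims by UPDATE first and only then fetches the rows stamped with its own worker_id, so two workers can never walk away with the same job. A Python/SQLite sketch of that claim-then-fetch idiom (hypothetical minimal schema; the MySQL original appends LIMIT directly to the UPDATE, which SQLite expresses via a subquery):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute("""CREATE TABLE job (job_id INTEGER PRIMARY KEY, analysis_id INTEGER,
                                worker_id INTEGER, status TEXT, semaphore_count INTEGER)""")
db.executemany("INSERT INTO job VALUES (?, ?, NULL, 'READY', 0)",
               [(i, 7) for i in range(1, 6)])          # five READY jobs for analysis 7

def grab_jobs_for_worker(worker_id, analysis_id, batch_size):
    # Step 1: stamp our worker_id onto up to batch_size eligible jobs.
    # The status='READY' guard skips jobs already CLAIMED by someone else.
    db.execute("""UPDATE job SET worker_id = ?, status = 'CLAIMED'
                  WHERE job_id IN (SELECT job_id FROM job
                                   WHERE analysis_id = ? AND status = 'READY'
                                         AND semaphore_count <= 0
                                   LIMIT ?)""", (worker_id, analysis_id, batch_size))
    # Step 2: fetch only the rows carrying our stamp.
    return [r[0] for r in db.execute(
        "SELECT job_id FROM job WHERE analysis_id = ? AND worker_id = ? AND status = 'CLAIMED'",
        (analysis_id, worker_id))]

print(grab_jobs_for_worker(worker_id=101, analysis_id=7, batch_size=3))  # three job_ids
print(grab_jobs_for_worker(worker_id=102, analysis_id=7, batch_size=3))  # the remaining two
```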
@@ -514,16 +514,16 @@ sub reclaim_job_for_worker {
 my $worker = shift or return;
 my $job = shift or return;
-my $worker_id = $worker->worker_id();
+my $worker_id = $worker->dbID();
 my $job_id = $job->dbID;
-my $sql = "UPDATE analysis_job SET status='CLAIMED', worker_id=? WHERE analysis_job_id=? AND status='READY'";
+my $sql = "UPDATE job SET status='CLAIMED', worker_id=? WHERE job_id=? AND status='READY'";
 my $sth = $self->prepare($sql);
 $sth->execute($worker_id, $job_id);
 $sth->finish;
-my $constraint = "a.analysis_job_id='$job_id' AND a.worker_id='$worker_id' AND a.status='CLAIMED'";
+my $constraint = "j.job_id='$job_id' AND j.worker_id='$worker_id' AND j.status='CLAIMED'";
 return $self->_generic_fetch($constraint);
 }
@@ -549,20 +549,20 @@ sub release_undone_jobs_from_worker {
 my ($self, $worker, $msg) = @_;
 my $max_retry_count = $worker->analysis->stats->max_retry_count();
-my $worker_id = $worker->worker_id();
+my $worker_id = $worker->dbID();
 #first just reset the claimed jobs, these don't need a retry_count index increment:
 # (previous worker_id does not matter, because that worker has never had a chance to run the job)
 $self->dbc->do( qq{
-    UPDATE analysis_job
+    UPDATE job
     SET status='READY', worker_id=NULL
     WHERE status='CLAIMED'
     AND worker_id='$worker_id'
 } );
 my $sth = $self->prepare( qq{
-    SELECT analysis_job_id
-    FROM analysis_job
+    SELECT job_id
+    FROM job
     WHERE worker_id='$worker_id'
     AND status in ('COMPILATION','GET_INPUT','RUN','WRITE_OUTPUT')
 } );
@@ -606,9 +606,9 @@ sub release_and_age_job {
 # FIXME: would it be possible to retain worker_id for READY jobs in order to temporarily keep track of the previous (failed) worker?
 #
 $self->dbc->do( qq{
-    UPDATE analysis_job
+    UPDATE job
     SET status=IF( $may_retry AND (retry_count<$max_retry_count), 'READY', 'FAILED'), retry_count=retry_count+1
-    WHERE analysis_job_id=$job_id
+    WHERE job_id=$job_id
     AND status in ('COMPILATION','GET_INPUT','RUN','WRITE_OUTPUT')
 } );
 }
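release_and_age_job decides READY-versus-FAILED and bumps retry_count in a single atomic UPDATE. A Python/SQLite sketch of the same expression (SQLite has no IF(), so CASE WHEN stands in; the schema is hypothetical and minimal):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute("CREATE TABLE job (job_id INTEGER PRIMARY KEY, status TEXT, retry_count INTEGER)")
db.execute("INSERT INTO job VALUES (1, 'RUN', 2)")     # died mid-run, already retried twice

def release_and_age_job(job_id, max_retry_count, may_retry=True):
    # One UPDATE both re-readies (or fails) and ages the job; the status
    # guard ensures only genuinely in-flight jobs are touched.
    db.execute("""UPDATE job
                  SET status = CASE WHEN ? AND retry_count < ? THEN 'READY' ELSE 'FAILED' END,
                      retry_count = retry_count + 1
                  WHERE job_id = ?
                    AND status IN ('COMPILATION','GET_INPUT','RUN','WRITE_OUTPUT')""",
               (may_retry, max_retry_count, job_id))

release_and_age_job(1, max_retry_count=3)
print(db.execute("SELECT status, retry_count FROM job").fetchone())   # ('READY', 3)

db.execute("UPDATE job SET status='RUN'")                             # it dies again
release_and_age_job(1, max_retry_count=3)
print(db.execute("SELECT status, retry_count FROM job").fetchone())   # ('FAILED', 4)
```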
@@ -655,9 +655,9 @@ sub reset_job_by_dbID {
 my $job_id = shift or throw("job_id of the job to be reset is undefined");
 $self->dbc->do( qq{
-    UPDATE analysis_job
+    UPDATE job
     SET status='READY', retry_count=0
-    WHERE analysis_job_id=$job_id
+    WHERE job_id=$job_id
 } );
 }
@@ -686,7 +686,7 @@ sub reset_all_jobs_for_analysis_id {
 throw("must define analysis_id") unless($analysis_id);
 my ($sql, $sth);
-$sql = "UPDATE analysis_job SET status='READY' WHERE status!='BLOCKED' and analysis_id=?";
+$sql = "UPDATE job SET status='READY' WHERE status!='BLOCKED' and analysis_id=?";
 $sth = $self->prepare($sql);
 $sth->execute($analysis_id);
 $sth->finish;
@@ -717,13 +717,13 @@ sub remove_analysis_id {
 $self->dbc->do($sql);
 $sql = "ANALYZE TABLE analysis_stats";
 $self->dbc->do($sql);
-$sql = "DELETE FROM analysis_job WHERE analysis_id=$analysis_id";
+$sql = "DELETE FROM job WHERE analysis_id=$analysis_id";
 $self->dbc->do($sql);
-$sql = "ANALYZE TABLE analysis_job";
+$sql = "ANALYZE TABLE job";
 $self->dbc->do($sql);
-$sql = "DELETE FROM hive WHERE analysis_id=$analysis_id";
+$sql = "DELETE FROM worker WHERE analysis_id=$analysis_id";
 $self->dbc->do($sql);
-$sql = "ANALYZE TABLE hive";
+$sql = "ANALYZE TABLE worker";
 $self->dbc->do($sql);
 }
...
@@ -139,7 +139,7 @@ sub refresh {
 sub get_running_worker_count {
 my ($self, $stats) = @_;
-my $sql = "SELECT count(*) FROM hive WHERE cause_of_death='' and analysis_id=?";
+my $sql = "SELECT count(*) FROM worker WHERE cause_of_death='' and analysis_id=?";
 my $sth = $self->prepare($sql);
 $sth->execute($stats->analysis_id);
 my ($liveCount) = $sth->fetchrow_array();
...
@@ -36,17 +36,9 @@ Bio::EnsEMBL::Hive::DBSQL::DBAdaptor
 package Bio::EnsEMBL::Hive::DBSQL::DBAdaptor;
 use strict;
-use Bio::EnsEMBL::DBSQL::DBConnection;
 use base ('Bio::EnsEMBL::DBSQL::DBAdaptor');
-#sub get_Queen {
-# my $self = shift;
-#
-# return $self->get_QueenAdaptor();
-#}
 sub get_available_adaptors {
 my %pairs = (
...
@@ -32,9 +32,9 @@ sub register_message {
 # (the timestamp 'moment' column will be set automatically)
 my $sql = qq{
-    REPLACE INTO job_message (analysis_job_id, worker_id, analysis_id, retry_count, status, msg, is_error)
-    SELECT analysis_job_id, worker_id, analysis_id, retry_count, status, ?, ?
-    FROM analysis_job WHERE analysis_job_id=?
+    REPLACE INTO job_message (job_id, worker_id, analysis_id, retry_count, status, msg, is_error)
+    SELECT job_id, worker_id, analysis_id, retry_count, status, ?, ?
+    FROM job WHERE job_id=?
 };
 my $sth = $self->prepare( $sql );
...
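register_message snapshots the job's live columns into job_message in one statement, so the logged row can never disagree with the job row it describes. The same REPLACE ... SELECT shape in Python/SQLite (hypothetical minimal schema; the real table also has an auto-set 'moment' timestamp):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute("""CREATE TABLE job (job_id INTEGER PRIMARY KEY, worker_id INTEGER,
                                analysis_id INTEGER, retry_count INTEGER, status TEXT)""")
db.execute("""CREATE TABLE job_message (job_id INTEGER, worker_id INTEGER, analysis_id INTEGER,
                                        retry_count INTEGER, status TEXT, msg TEXT, is_error INTEGER)""")
db.execute("INSERT INTO job VALUES (42, 7, 3, 1, 'RUN')")

def register_message(job_id, msg, is_error):
    # the inner SELECT pulls the job's current columns, so nothing is copied by hand
    db.execute("""REPLACE INTO job_message (job_id, worker_id, analysis_id,
                                            retry_count, status, msg, is_error)
                  SELECT job_id, worker_id, analysis_id, retry_count, status, ?, ?
                  FROM job WHERE job_id = ?""", (msg, is_error, job_id))

register_message(42, 'went wrong', 1)
print(db.execute("SELECT job_id, status, msg, is_error FROM job_message").fetchone())
# -> (42, 'RUN', 'went wrong', 1)
```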
@@ -12,7 +12,7 @@
 =head1 DESCRIPTION
-This module together with its data container are used to enable dataflow into arbitrary tables (rather than just analysis_job).
+This module together with its data container are used to enable dataflow into arbitrary tables (rather than just job).
 NakedTable objects know *where* to dataflow, and NakedTableAdaptor knows *how* to dataflow.
...
@@ -78,7 +78,7 @@ sub generate_job_name {
 sub responsible_for_worker {
 my ($self, $worker) = @_;
-return $worker->beekeeper() eq $self->type();
+return $worker->meadow_type() eq $self->type();
 }
 sub check_worker_is_alive_and_mine {
...
@@ -37,11 +37,11 @@ By inheriting from this module you make your module able to deal with parameters
 =head1 DESCRIPTION
 Most of Compara RunnableDB methods work under assumption