=pod 

=head1 NAME

    Bio::EnsEMBL::Hive::Queen

=head1 DESCRIPTION

    The Queen of the Hive-based job control system is responsible for 'birthing' the
    correct number of workers of the right type so that they can find jobs to do.
    It will also free up jobs of Workers that died unexpectedly so that other workers
    can claim and run them.

    Hive-based processing is a concept based on a more controlled version
    of an autonomous agent type system.  Each worker is not told what to do
    (as in a centralized control system, like the current pipeline system)
    but rather queries a central database for jobs ("give me jobs").

    Each worker is linked to an analysis_id and registers itself in the Hive
    on creation.  It creates a RunnableDB instance of the Analysis->module,
    gets $analysis->stats->batch_size jobs from the job table, does its work,
    and creates the next layer of job entries by interfacing with
    the DataflowRuleAdaptor to determine which analyses it needs to pass its
    output data to, creating jobs for those analyses in the database.
    It repeats this cycle until it has lived its allotted lifetime or until there are no
    more jobs left.
    The lifetime limit is just a safety limit to prevent runaway workers from 'infecting'
    a system.

    The Queen's job is simply to birth Workers of the correct analysis_id to get the
    work done.  The only other thing the Queen does is free up jobs that were
    claimed by Workers that died unexpectedly so that other workers can take
    over the work.

    The Beekeeper is in charge of interfacing between the Queen and a compute resource
    or 'compute farm'.  Its job is to query Queens to see if they need any workers and to
    send the requested number of workers to open machines via the runWorker.pl script.
    It is also responsible for interfacing with the Queen to identify workers which died
    unexpectedly.
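
    Purely as an illustrative sketch (not lifted from runWorker.pl or beekeeper.pl;
    the flag values are made up), a Worker-managing process typically drives the
    Queen through the methods defined further down in this module, roughly like this:

        my $queen  = $hive_dba->get_Queen;              # assuming an already connected Hive DBAdaptor
        my $worker = $queen->create_new_worker( -meadow_type => 'LOCAL',
                                                -meadow_name => 'my_host',
                                                -process_id  => $$,
                                                -exec_host   => 'localhost' );
        $queen->specialize_new_worker( $worker );       # let the Queen pick a suitable analysis
            # ... the Worker then runs its job batches, periodically calling:
        $queen->check_in_worker( $worker );
            # ... and, once it has finished or failed:
        $queen->register_worker_death( $worker );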

=head1 LICENSE

    Copyright [1999-2014] Wellcome Trust Sanger Institute and the EMBL-European Bioinformatics Institute

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

         http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License
    is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and limitations under the License.

=head1 CONTACT

    Please subscribe to the Hive mailing list:  http://listserver.ebi.ac.uk/mailman/listinfo/ehive-users  to discuss Hive-related questions or to be notified of our updates

=head1 APPENDIX

    The rest of the documentation details each of the object methods. 
    Internal methods are usually preceded with a _

=cut


package Bio::EnsEMBL::Hive::Queen;

use strict;
use POSIX;
use File::Path 'make_path';
use List::Util 'sum';

use Bio::EnsEMBL::Hive::Utils ('destringify', 'dir_revhash');  # NB: needed by invisible code
use Bio::EnsEMBL::Hive::AnalysisJob;
use Bio::EnsEMBL::Hive::Worker;
use Bio::EnsEMBL::Hive::Scheduler;

use base ('Bio::EnsEMBL::Hive::DBSQL::ObjectAdaptor');


sub default_table_name {
    return 'worker';
}


sub default_insertion_method {
    return 'INSERT';
}


sub object_class {
    return 'Bio::EnsEMBL::Hive::Worker';
}


############################
#
# PUBLIC API
#
############################


=head2 create_new_worker

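  Example    : # a hedged usage sketch: the flag names mirror the ones parsed by this method, the values are illustrative
               my $worker = $queen->create_new_worker( -meadow_type => 'LOCAL',
                                                       -meadow_name => 'my_host',
                                                       -process_id  => $$,
                                                       -exec_host   => 'localhost' );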
  Description: Creates an entry in the worker table,
               populates some non-storable attributes
               and returns a Worker object based on that insert.
               This guarantees that each worker in this Queen's hive is properly registered.
  Returntype : Bio::EnsEMBL::Hive::Worker
  Caller     : runWorker.pl

=cut

sub create_new_worker {
    my $self    = shift @_;
    my %flags   = @_;

    my ($meadow_type, $meadow_name, $process_id, $exec_host, $resource_class_id, $resource_class_name,
        $no_write, $debug, $worker_log_dir, $hive_log_dir, $job_limit, $life_span, $no_cleanup, $retry_throwing_jobs, $can_respecialize)
     = @flags{qw(-meadow_type -meadow_name -process_id -exec_host -resource_class_id -resource_class_name
            -no_write -debug -worker_log_dir -hive_log_dir -job_limit -life_span -no_cleanup -retry_throwing_jobs -can_respecialize)};

    foreach my $prev_worker_incarnation (@{ $self->fetch_all( "status!='DEAD' AND meadow_type='$meadow_type' AND meadow_name='$meadow_name' AND process_id='$process_id'" ) }) {
            # so far 'RELOCATED' events have been detected on LSF 9.0 in response to sending signal #99 or #100
            # Since I don't know how to avoid them, I am trying to register them when they happen.
            # The following snippet buries the previous incarnation of the Worker before starting a new one.
            #
            # FIXME: if the GarbageCollector (beekeeper -dead) gets to these processes first, it will register them as DEAD/UNKNOWN.
            #       LSF 9.0 does not report "rescheduling" events in the output of 'bacct', but does mention them in 'bhist'.
            #       So parsing 'bhist' output would probably yield the most accurate & confident registration of these events.
        $prev_worker_incarnation->cause_of_death( 'RELOCATED' );
        $self->register_worker_death( $prev_worker_incarnation );
    }

    if( defined($resource_class_name) ) {
        my $rc = $self->db->get_ResourceClassAdaptor->fetch_by_name($resource_class_name)
            or die "resource_class with name='$resource_class_name' could not be fetched from the database";

        $resource_class_id = $rc->dbID;
    }

    my $sql = q{INSERT INTO worker (born, last_check_in, meadow_type, meadow_name, host, process_id, resource_class_id)
              VALUES (CURRENT_TIMESTAMP, CURRENT_TIMESTAMP, ?, ?, ?, ?, ?)};

    my $sth = $self->prepare($sql);
    $sth->execute($meadow_type, $meadow_name, $exec_host, $process_id, $resource_class_id);
    my $worker_id = $self->dbc->db_handle->last_insert_id(undef, undef, 'worker', 'worker_id')
        or die "Could not insert a new worker";
    $sth->finish;

    my $worker = $self->fetch_by_dbID($worker_id)
        or die "Could not fetch worker with dbID=$worker_id";

    if($hive_log_dir or $worker_log_dir) {
        my $dir_revhash = dir_revhash($worker_id);
        $worker_log_dir ||= $hive_log_dir .'/'. ($dir_revhash ? "$dir_revhash/" : '') .'worker_id_'.$worker_id;

        eval {
            make_path( $worker_log_dir );
            1;
        } or die "Could not create '$worker_log_dir' directory : $@";

        $worker->log_dir( $worker_log_dir );
        $self->update_log_dir( $worker );   # autoloaded
    }

    $worker->init;

    if(defined($job_limit)) {
      $worker->job_limiter($job_limit);
      $worker->life_span(0);
    }

    $worker->life_span($life_span * 60)                 if($life_span);

    $worker->execute_writes(0)                          if($no_write);

    $worker->perform_cleanup(0)                         if($no_cleanup);

    $worker->debug($debug)                              if($debug);

    $worker->retry_throwing_jobs($retry_throwing_jobs)  if(defined $retry_throwing_jobs);

    $worker->can_respecialize($can_respecialize)        if(defined $can_respecialize);

    return $worker;
}


=head2 specialize_new_worker

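  Example    : # a hedged sketch: 'blast_factory' is a hypothetical logic_name, and the flags mirror the ones parsed by this method
               $queen->specialize_new_worker( $worker, -logic_name => 'blast_factory' );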
  Description: If -analysis_id, -logic_name or -job_id is specified, it will try to specialize the Worker accordingly.
               If none of them is specified, the Queen will analyze the hive and pick the most suitable analysis.
  Caller     : Bio::EnsEMBL::Hive::Worker

=cut

sub specialize_new_worker {
    my $self    = shift @_;
    my $worker  = shift @_;
    my %flags   = @_;

    my ($analysis_id, $logic_name, $job_id, $force)
     = @flags{qw(-analysis_id -logic_name -job_id -force)};

    if( scalar( grep {defined($_)} ($analysis_id, $logic_name, $job_id) ) > 1) {
        die "At most one of the options {-analysis_id, -logic_name, -job_id} can be set to pre-specialize a Worker";
    }

    my ($analysis, $stats, $special_batch);
    my $analysis_stats_adaptor = $self->db->get_AnalysisStatsAdaptor;

    if($job_id or $analysis_id or $logic_name) {    # probably pre-specialized from command-line

        if($job_id) {
            warn "resetting and fetching job for job_id '$job_id'\n";

            my $job_adaptor = $self->db->get_AnalysisJobAdaptor;

            my $job = $job_adaptor->fetch_by_dbID( $job_id )
                or die "Could not fetch job with dbID='$job_id'";
            my $job_status = $job->status();

            if($job_status =~/(CLAIMED|PRE_CLEANUP|FETCH_INPUT|RUN|WRITE_OUTPUT|POST_CLEANUP)/ ) {
                die "Job with dbID='$job_id' is already in progress, cannot run";   # FIXME: try GC first, then complain
            } elsif($job_status =~/(DONE|SEMAPHORED)/ and !$force) {
                die "Job with dbID='$job_id' is $job_status, please use -force 1 to override";
            }

            if(($job_status eq 'DONE') and $job->semaphored_job_id) {
                warn "Increasing the semaphore count of the dependent job";
                $job_adaptor->increase_semaphore_count_for_jobid( $job->semaphored_job_id );
            }

            my $worker_id = $worker->dbID;
            if($job = $job_adaptor->reset_or_grab_job_by_dbID($job_id, $worker_id)) {
                $special_batch = [ $job ];
                $analysis_id = $job->analysis_id;
            } else {
                die "Could not claim job with dbID='$job_id' for worker with dbID='$worker_id'";
            }
        }

        if($logic_name) {
            $analysis = $self->db->get_AnalysisAdaptor->fetch_by_logic_name($logic_name)
                or die "analysis with name='$logic_name' could not be fetched from the database";

            $analysis_id = $analysis->dbID;

        } elsif($analysis_id) {
            $analysis = $self->db->get_AnalysisAdaptor->fetch_by_dbID($analysis_id)
                or die "analysis with dbID='$analysis_id' could not be fetched from the database";
        }

        if( $worker->resource_class_id
        and $worker->resource_class_id != $analysis->resource_class_id) {
                die "resource_class of analysis ".$analysis->logic_name." is incompatible with this Worker's resource_class";
        }

        $stats = $analysis->stats;
        $self->safe_synchronize_AnalysisStats($stats);

        unless($special_batch or $force) {    # do we really need to run this analysis?
            if($self->get_hive_current_load() >= 1.1) {
                $worker->cause_of_death('HIVE_OVERLOAD');
                die "Hive is overloaded, can't specialize a worker";
            }
            if($stats->status eq 'BLOCKED') {
                die "Analysis is BLOCKED, can't specialize a worker";
            }
            if($stats->num_required_workers <= 0) {
                die "Analysis doesn't require extra workers at the moment";
            }
            if($stats->status eq 'DONE') {
                die "Analysis is DONE, and doesn't require workers";
            }
        }
            # probably scheduled by beekeeper.pl:
    } elsif( $stats = Bio::EnsEMBL::Hive::Scheduler::suggest_analysis_to_specialize_by_rc_id_meadow_type($self, $worker->resource_class_id, $worker->meadow_type) ) {

        $worker->analysis( undef ); # make sure we reset anything that was there before
        $analysis_id = $stats->analysis_id;
    } else {
        $worker->cause_of_death('NO_ROLE');
        die "No analysis suitable for the worker was found\n";
    }

        # now set it in the $worker:

    $worker->analysis_id( $analysis_id );

    $self->update_analysis_id( $worker );   # autoloaded

    if($special_batch) {
        $worker->special_batch( $special_batch );
    } else {    # count it as autonomous worker sharing the load of that analysis:

        $analysis_stats_adaptor->update_status($analysis_id, 'WORKING');

        $analysis_stats_adaptor->decrease_required_workers($worker->analysis_id);
    }

        # The following increment used to be done only when no specific task was given to the worker,
        # thereby excluding such "special task" workers from being counted in num_running_workers.
        #
        # However this may be tricky to emulate by triggers that know nothing about "special tasks",
        # so I am (temporarily?) simplifying the accounting algorithm.
        #
    unless( $self->db->hive_use_triggers() ) {
        $analysis_stats_adaptor->increase_running_workers($worker->analysis_id);
    }
}


sub register_worker_death {
    my ($self, $worker, $self_burial) = @_;

    return unless($worker);

    my $cod = $worker->cause_of_death() || 'UNKNOWN';    # make sure we do not attempt to insert a void

                    # FIXME: make it possible to set the 'died' timestamp if we have detected it from logs:
    my $sql = qq{UPDATE worker SET died=CURRENT_TIMESTAMP
    } . ( $self_burial ? ',last_check_in=CURRENT_TIMESTAMP ' : '') . qq{
                    ,status='DEAD'
                    ,work_done='}. $worker->work_done . qq{'
                    ,cause_of_death='$cod'
                WHERE worker_id='}. $worker->dbID . qq{'};
    $self->dbc->do( $sql );

    if(my $analysis_id = $worker->analysis_id) {
        my $analysis_stats_adaptor = $self->db->get_AnalysisStatsAdaptor;

        unless( $self->db->hive_use_triggers() ) {
            $analysis_stats_adaptor->decrease_running_workers($worker->analysis_id);
        }

        unless( $cod eq 'NO_WORK'
            or  $cod eq 'JOB_LIMIT'
            or  $cod eq 'HIVE_OVERLOAD'
            or  $cod eq 'LIFESPAN'
        ) {
                $self->db->get_AnalysisJobAdaptor->release_undone_jobs_from_worker($worker);
        }

            # re-sync the analysis_stats when a worker dies as part of dynamic sync system
        if($self->safe_synchronize_AnalysisStats($worker->analysis->stats)->status ne 'DONE') {
            # since I'm dying I should make sure there is someone to take my place after I'm gone ...
            # above synch still sees me as a 'living worker' so I need to compensate for that
            $analysis_stats_adaptor->increase_required_workers($worker->analysis_id);
        }
    }
}


sub check_for_dead_workers {    # scans the whole Valley for lost Workers (but ignores unreachable ones)
    my ($self, $valley, $check_buried_in_haste) = @_;

    warn "GarbageCollector:\tChecking for lost Workers...\n";

    my $last_few_seconds            = 5;    # FIXME: It is probably a good idea to expose this parameter for easier tuning.
    my $queen_overdue_workers       = $self->fetch_overdue_workers( $last_few_seconds );    # check the workers we have not seen active during the $last_few_seconds
    my %mt_and_pid_to_worker_status = ();
    my %worker_status_counts        = ();
    my %mt_and_pid_to_lost_worker   = ();

    warn "GarbageCollector:\t[Queen:] out of ".scalar(@$queen_overdue_workers)." Workers that haven't checked in during the last $last_few_seconds seconds...\n";

    foreach my $worker (@$queen_overdue_workers) {

        my $meadow_type = $worker->meadow_type;
        if(my $meadow = $valley->find_available_meadow_responsible_for_worker($worker)) {
            $mt_and_pid_to_worker_status{$meadow_type} ||= $meadow->status_of_all_our_workers;
        } else {
            $worker_status_counts{$meadow_type}{'UNREACHABLE'}++;

            next;   # Worker is unreachable from this Valley
        }

        my $process_id = $worker->process_id;
        if(my $status = $mt_and_pid_to_worker_status{$meadow_type}{$process_id}) { # can be RUN|PEND|xSUSP
            $worker_status_counts{$meadow_type}{$status}++;
        } else {
            $worker_status_counts{$meadow_type}{'LOST'}++;

            $mt_and_pid_to_lost_worker{$meadow_type}{$process_id} = $worker;
        }
    }

        # just a quick summary report:
    foreach my $meadow_type (keys %worker_status_counts) {
        warn "GarbageCollector:\t[$meadow_type Meadow:]\t".join(', ', map { "$_:$worker_status_counts{$meadow_type}{$_}" } keys %{$worker_status_counts{$meadow_type}})."\n\n";
    }

    while(my ($meadow_type, $pid_to_lost_worker) = each %mt_and_pid_to_lost_worker) {
        my $this_meadow = $valley->available_meadow_hash->{$meadow_type};

        if(my $lost_this_meadow = scalar(keys %$pid_to_lost_worker) ) {
            warn "GarbageCollector:\tDiscovered $lost_this_meadow lost $meadow_type Workers\n";

            my $wpid_to_cod = {};
            if($this_meadow->can('find_out_causes')) {
                $wpid_to_cod = $this_meadow->find_out_causes( keys %$pid_to_lost_worker );
                my $lost_with_known_cod = scalar(keys %$wpid_to_cod);
                warn "GarbageCollector:\tFound why $lost_with_known_cod of $meadow_type Workers died\n";
            } else {
                warn "GarbageCollector:\t$meadow_type meadow does not support post-mortem examination\n";
            }

            warn "GarbageCollector:\tReleasing the jobs\n";
            while(my ($process_id, $worker) = each %$pid_to_lost_worker) {
                $worker->cause_of_death( $wpid_to_cod->{$process_id} || 'UNKNOWN');
                $self->register_worker_death($worker);
            }
        }
    }

        # the following bit is completely Meadow-agnostic and only restores database integrity:
    if($check_buried_in_haste) {
        warn "GarbageCollector:\tChecking for Workers buried in haste...\n";
        my $buried_in_haste_list = $self->fetch_all_dead_workers_with_jobs();
        if(my $bih_number = scalar(@$buried_in_haste_list)) {
            warn "GarbageCollector:\tfound $bih_number jobs, reclaiming.\n\n";
            if($bih_number) {
                my $job_adaptor = $self->db->get_AnalysisJobAdaptor();
                foreach my $worker (@$buried_in_haste_list) {
                    $job_adaptor->release_undone_jobs_from_worker($worker);
                }
            }
        } else {
            warn "GarbageCollector:\tfound none\n";
        }
    }
}


    # a new version that both checks in and updates the status
sub check_in_worker {
    my ($self, $worker) = @_;

    $self->dbc->do("UPDATE worker SET last_check_in=CURRENT_TIMESTAMP, status='".$worker->status."', work_done='".$worker->work_done."' WHERE worker_id='".$worker->dbID."'");
}


=head2 reset_job_by_dbID_and_sync

  Arg [1]    : int $job_id
  Example    : $queen->reset_job_by_dbID_and_sync($job_id);
  Description: For the specified job_id it will fetch just that job,
               reset it completely as if it had never run, and re-synchronize
               the stats of its analysis.
               Specifying a specific job bypasses the safety checks,
               thus multiple workers could be running the
               same job simultaneously (use only for debugging).
  Returntype : none
  Exceptions :
  Caller     : beekeeper.pl

=cut

sub reset_job_by_dbID_and_sync {
    my ($self, $job_id) = @_;

    my $job     = $self->db->get_AnalysisJobAdaptor->reset_or_grab_job_by_dbID($job_id);

    my $stats   = $job->analysis->stats;

    $self->synchronize_AnalysisStats($stats);
}


######################################
#
# Public API interface for beekeeper
#
######################################


    # Note: asking for Queen->fetch_overdue_workers(0) essentially means
    #       "fetch all workers known to the Queen not to be officially dead"
    #
sub fetch_overdue_workers {
    my ($self,$overdue_secs) = @_;

    $overdue_secs = 3600 unless(defined($overdue_secs));

    my $constraint = "status!='DEAD' AND ".{
            'mysql'     =>  "(UNIX_TIMESTAMP()-UNIX_TIMESTAMP(last_check_in)) > $overdue_secs",
            'sqlite'    =>  "(strftime('%s','now')-strftime('%s',last_check_in)) > $overdue_secs",
            'pgsql'     =>  "EXTRACT(EPOCH FROM CURRENT_TIMESTAMP - last_check_in) > $overdue_secs",
        }->{ $self->dbc->driver };

    return $self->fetch_all( $constraint );
}


sub fetch_all_dead_workers_with_jobs {
    my $self = shift;

    return $self->fetch_all( "JOIN job j USING(worker_id) WHERE worker.status='DEAD' AND j.status NOT IN ('DONE', 'READY', 'FAILED', 'PASSED_ON') GROUP BY worker_id" );
}


=head2 synchronize_hive

  Arg [1]    : $filter_analysis (optional)
  Example    : $queen->synchronize_hive();
  Description: Runs through all analyses in the system and synchronizes
               the analysis_stats summary with the states in the job
               and worker tables.  It then checks all the blocking rules
               and blocks/unblocks analyses as needed.
  Exceptions : none
  Caller     : general

=cut

sub synchronize_hive {
  my $self          = shift;
  my $filter_analysis = shift; # optional parameter

  my $start_time = time();

  my $list_of_analyses = $filter_analysis ? [$filter_analysis] : $self->db->get_AnalysisAdaptor->fetch_all;

  print STDERR "\nSynchronizing the hive (".scalar(@$list_of_analyses)." analyses this time):\n";
  foreach my $analysis (@$list_of_analyses) {
    $self->synchronize_AnalysisStats($analysis->stats);
    print STDERR ( ($analysis->stats()->status eq 'BLOCKED') ? 'x' : 'o');
  }
  print STDERR "\n";

  print STDERR ''.((time() - $start_time))." seconds to synchronize_hive\n\n";
}


=head2 safe_synchronize_AnalysisStats

  Arg [1]    : Bio::EnsEMBL::Hive::AnalysisStats object
  Example    : $self->safe_synchronize_AnalysisStats($stats);
  Description: Wrapper around synchronize_AnalysisStats that performs extra
               checks and grabs the sync_lock before proceeding with the sync.
               Used by the distributed worker sync system to avoid contention.
  Exceptions : none
  Caller     : general

=cut

sub safe_synchronize_AnalysisStats {
    my ($self, $stats) = @_;

    my $max_refresh_attempts = 5;
    while($stats->sync_lock and $max_refresh_attempts--) {   # another Worker/Beekeeper is synching this analysis right now
        sleep(1);
        $stats->refresh();  # just try to avoid collision
    }

    return $stats if($stats->status eq 'DONE');
    return $stats if(($stats->status eq 'WORKING') and
                   ($stats->seconds_since_last_update < 3*60));

        # try to claim the sync_lock
    my $sql = "UPDATE analysis_stats SET status='SYNCHING', sync_lock=1 ".
              "WHERE sync_lock=0 and analysis_id=" . $stats->analysis_id;
    my $row_count = $self->dbc->do($sql);  
    return $stats unless($row_count == 1);        # return the un-updated status if locked
  
        # if we managed to obtain the lock, let's go and perform the sync:
    $self->synchronize_AnalysisStats($stats);

    return $stats;
}


=head2 synchronize_AnalysisStats

  Arg [1]    : Bio::EnsEMBL::Hive::AnalysisStats object
  Example    : $self->synchronize_AnalysisStats($analysisStats);
  Description: Queries the job and worker tables to get summary counts
               and rebuilds the AnalysisStats object.  Then updates the
               analysis_stats table with the new summary info
  Returntype : newly synced Bio::EnsEMBL::Hive::AnalysisStats object
  Exceptions : none
  Caller     : general

=cut

sub synchronize_AnalysisStats {
    my $self = shift;
    my $analysisStats = shift;

    return $analysisStats unless($analysisStats);
    return $analysisStats unless($analysisStats->analysis_id);

    $analysisStats->refresh(); ## Need to get the new hive_capacity for dynamic analyses

    unless($self->db->hive_use_triggers()) {

        my $job_counts = $self->db->get_AnalysisJobAdaptor->fetch_job_counts_hashed_by_status( $analysisStats->analysis_id );

        $analysisStats->semaphored_job_count( $job_counts->{'SEMAPHORED'} || 0 );
        $analysisStats->ready_job_count(      $job_counts->{'READY'} || 0 );
        $analysisStats->failed_job_count(     $job_counts->{'FAILED'} || 0 );
        $analysisStats->done_job_count(       ($job_counts->{'DONE'} || 0) + ($job_counts->{'PASSED_ON'} || 0) ); # done here or potentially done elsewhere
        $analysisStats->total_job_count(      sum( values %$job_counts ) || 0 );

    } # unless($self->db->hive_use_triggers())

        # compute the number of total required workers for this analysis (taking into account the jobs that are already running)
    my $analysis              = $analysisStats->analysis();
    my $scheduling_allowed    =  ( !defined( $analysisStats->hive_capacity ) or $analysisStats->hive_capacity )
                              && ( !defined( $analysis->analysis_capacity  ) or $analysis->analysis_capacity  );
    my $required_workers    = $scheduling_allowed
                            && POSIX::ceil( $analysisStats->ready_job_count() / $analysisStats->get_or_estimate_batch_size() );
    $analysisStats->num_required_workers( $required_workers );


    $analysisStats->check_blocking_control_rules();

    $analysisStats->determine_status();

    # $analysisStats->sync_lock(0); ## do we perhaps need it here?
    $analysisStats->update;  #update and release sync_lock

    return $analysisStats;
}


=head2 get_num_failed_analyses

  Arg [1]    : Bio::EnsEMBL::Hive::Analysis object (optional)
  Example    : if( $self->get_num_failed_analyses( $my_analysis )) { do_something; }
  Example    : my $num_failed_analyses = $self->get_num_failed_analyses();
  Description: Reports all failed analyses and returns
               either the total number of failed analyses (if no $filter_analysis was provided)
               or 1/0, depending on whether $filter_analysis failed or not.
  Returntype : int
  Exceptions : none
  Caller     : general

=cut

sub get_num_failed_analyses {
    my ($self, $filter_analysis) = @_;

    my $failed_analyses = $self->db->get_AnalysisAdaptor->fetch_all_failed_analyses();

    my $filter_analysis_failed = 0;

    foreach my $failed_analysis (@$failed_analyses) {
        warn "\t##########################################################\n";
        warn "\t# Too many jobs in analysis '".$failed_analysis->logic_name."' FAILED #\n";
        warn "\t##########################################################\n\n";
        if($filter_analysis and ($filter_analysis->dbID == $failed_analysis->dbID)) {
            $filter_analysis_failed = 1;
        }
    }

    return $filter_analysis ? $filter_analysis_failed : scalar(@$failed_analyses);
}


sub get_hive_current_load {
    my $self = shift;
    my $sql = qq{
        SELECT sum(1/hive_capacity)
        FROM worker w
        JOIN analysis_stats USING(analysis_id)
        WHERE w.status!='DEAD'
        AND hive_capacity IS NOT NULL
        AND hive_capacity>0
    };
    my $sth = $self->prepare($sql);
    $sth->execute();
    my ($load)=$sth->fetchrow_array();
    $sth->finish;
    return ($load || 0);
}


sub count_running_workers {
    my ($self, $analysis_id) = @_;

    return $self->count_all( "status!='DEAD'".($analysis_id ? " AND analysis_id=$analysis_id" : '') );
}


sub get_workers_rank {
    my ($self, $worker) = @_;

    return $self->count_all( "status!='DEAD' AND analysis_id=".$worker->analysis_id." AND worker_id<".$worker->dbID );
}


sub get_remaining_jobs_show_hive_progress {
  my $self = shift;
  my $sql = "SELECT sum(done_job_count), sum(failed_job_count), sum(total_job_count), ".
            "sum(ready_job_count * analysis_stats.avg_msec_per_job)/1000/60/60 ".
            "FROM analysis_stats";
  my $sth = $self->prepare($sql);
  $sth->execute();
  my ($done, $failed, $total, $cpuhrs) = $sth->fetchrow_array();
  $sth->finish;

  $done   ||= 0;
  $failed ||= 0;
  $total  ||= 0;
  my $completed = $total
    ? ((100.0 * ($done+$failed))/$total)
    : 0.0;
  my $remaining = $total - $done - $failed;
  warn sprintf("hive %1.3f%% complete (< %1.3f CPU_hrs) (%d todo + %d done + %d failed = %d total)\n",
          $completed, $cpuhrs, $remaining, $done, $failed, $total);
  return $remaining;
}


sub print_analysis_status {
    my ($self, $filter_analysis) = @_;

    my $list_of_analyses = $filter_analysis ? [$filter_analysis] : $self->db->get_AnalysisAdaptor->fetch_all;
    foreach my $analysis (sort {$a->dbID <=> $b->dbID} @$list_of_analyses) {
        print $analysis->stats->toString . "\n";
    }
}


sub print_running_worker_counts {
    my $self = shift;

    my $sql = qq{
        SELECT logic_name, count(*)
        FROM worker w
        JOIN analysis_base a USING(analysis_id)
        WHERE w.status!='DEAD'
        GROUP BY a.analysis_id
    };

    my $total_workers = 0;
    my $sth = $self->prepare($sql);
    $sth->execute();

    print "\n===== Stats of live Workers according to the Queen: ======\n";
    while((my $logic_name, my $worker_count)=$sth->fetchrow_array()) {
        printf("%30s : %d workers\n", $logic_name, $worker_count);
        $total_workers += $worker_count;
    }
    $sth->finish;
    printf("%30s : %d workers\n\n", '======= TOTAL =======', $total_workers);
}


=head2 monitor

  Arg[1]     : --none--
  Example    : $queen->monitor();
  Description: Monitors current throughput and stores the result in the monitor
               table
  Exceptions : none
  Caller     : beekeepers and other external processes

=cut

sub monitor {
    my $self = shift;
    my $sql = qq{
        INSERT INTO monitor
        SELECT CURRENT_TIMESTAMP, count(*),
    } . {
        'mysql'     =>  qq{ sum(work_done/(UNIX_TIMESTAMP()-UNIX_TIMESTAMP(born))),
                            sum(work_done/(UNIX_TIMESTAMP()-UNIX_TIMESTAMP(born)))/count(*), },
        'sqlite'    =>  qq{ sum(work_done/(strftime('%s','now')-strftime('%s',born))),
                            sum(work_done/(strftime('%s','now')-strftime('%s',born)))/count(*), },
        'pgsql'     =>  qq{ sum(work_done/(EXTRACT(EPOCH FROM CURRENT_TIMESTAMP - born))),
                            sum(work_done/(EXTRACT(EPOCH FROM CURRENT_TIMESTAMP - born)))/count(*), },
    }->{ $self->dbc->driver }. qq{
        group_concat(DISTINCT logic_name)
        FROM worker w
        LEFT JOIN analysis_base USING (analysis_id)
        WHERE w.status!='DEAD'
    };
      
    my $sth = $self->prepare($sql);
    $sth->execute();
}


=head2 register_all_workers_dead

  Example    : $queen->register_all_workers_dead();
  Description: Registers as DEAD all workers that are not already marked as DEAD
  Exceptions : none
  Caller     : beekeepers and other external processes

=cut

sub register_all_workers_dead {
    my $self = shift;

    my $all_workers_considered_alive = $self->fetch_all( "status!='DEAD'" );
    foreach my $worker (@{$all_workers_considered_alive}) {
        $worker->cause_of_death( 'UNKNOWN' );  # well, maybe we could have investigated further...
        $self->register_worker_death($worker);
    }
}


1;