=pod 

=head1 NAME
    Bio::EnsEMBL::Hive::Queen

=head1 DESCRIPTION
    The Queen of the Hive-based job control system is responsible for 'birthing' the
    correct number of workers of the right type so that they can find jobs to do.
    It also frees up the jobs of Workers that died unexpectedly so that other workers
    can claim and finish them.
    Hive-based processing is a concept based on a more controlled version
    of an autonomous-agent system.  Each worker is not told what to do
    (as it would be in a centralized control system such as the classical pipeline)
    but rather queries a central database for jobs ('give me jobs').
    Each worker is linked to an analysis_id, registers itself in the Hive
    on creation, creates a RunnableDB instance of the Analysis->module,
    gets $analysis->stats->batch_size jobs from the job table, does its work,
    and creates the next layer of job entries by interfacing with
    the DataflowRuleAdaptor to determine the analyses it needs to pass its
    output data to, creating jobs for the next analysis in the database.
    It repeats this cycle until it has lived out its lifetime or until there are no
    more jobs left.
    The lifetime limit is just a safety limit to prevent workers from 'infecting'
    a system.

    The Queen's job is simply to birth Workers of the correct analysis_id to get the
    work done.  The only other thing the Queen does is free up jobs that were
    claimed by Workers that died unexpectedly, so that other workers can take
    over the work.

    The Beekeeper is in charge of interfacing between the Queen and a compute resource
    or 'compute farm'.  Its job is to ask Queens whether they need any workers and to
    send the requested number of workers to open machines via the runWorker.pl script.
    It is also responsible for interfacing with the Queen to identify workers that died
    unexpectedly.  A minimal usage sketch is shown below.
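
    For illustration only (not part of this module's API contract), the following
    sketch shows a typical Worker life cycle driven through this adaptor; the
    $hive_dba handle, the meadow details and the analyses pattern are assumptions
    made for the example:

        my $queen  = $hive_dba->get_Queen;                              # this adaptor
        my $worker = $queen->create_new_worker(                         # register a new row in the 'worker' table
            -meadow_type => 'LOCAL',                                    # hypothetical meadow
            -meadow_name => 'my_host',
            -process_id  => $$,
            -meadow_host => 'my_host',
            -meadow_user => $ENV{'USER'},
        );
        $queen->specialize_worker( $worker, { '-analyses_pattern' => 'my_analysis%' } );    # or '-job_id'
        $worker->run();                                                 # the Worker then claims and runs jobs in batches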

=head1 LICENSE

    Copyright [1999-2015] Wellcome Trust Sanger Institute and the EMBL-European Bioinformatics Institute

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

         http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License
    is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and limitations under the License.

=head1 CONTACT
    Please subscribe to the Hive mailing list:  http://listserver.ebi.ac.uk/mailman/listinfo/ehive-users  to discuss Hive-related questions or to be notified of our updates

=head1 APPENDIX
    The rest of the documentation details each of the object methods.
    Internal methods are usually prefixed with an underscore (_).

=cut

package Bio::EnsEMBL::Hive::Queen;

use strict;
use warnings;

use File::Path 'make_path';
use List::Util qw(max);

use Bio::EnsEMBL::Hive::Utils ('destringify', 'dir_revhash');  # NB: needed by invisible code
use Bio::EnsEMBL::Hive::AnalysisJob;
use Bio::EnsEMBL::Hive::Role;
use Bio::EnsEMBL::Hive::Scheduler;
use Bio::EnsEMBL::Hive::Worker;
use base ('Bio::EnsEMBL::Hive::DBSQL::ObjectAdaptor');


sub default_table_name {
    return 'worker';
}


sub default_insertion_method {
    return 'INSERT';
}


sub object_class {
    return 'Bio::EnsEMBL::Hive::Worker';
}


############################
#
# PUBLIC API
#
############################


=head2 create_new_worker

  Description: Creates an entry in the worker table,
               populates some non-storable attributes
               and returns a Worker object based on that insert.
               This guarantees that each worker in this Queen's hive is properly registered.
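  Example    : # illustrative only -- the flag values below are hypothetical:
               my $worker = $queen->create_new_worker( -meadow_type => 'LSF', -meadow_name => 'my_farm',
                                                       -process_id  => $process_id, -job_limit => 10, -can_respecialize => 1 );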
  Returntype : Bio::EnsEMBL::Hive::Worker
  Caller     : runWorker.pl

=cut

sub create_new_worker {
    my $self    = shift @_;
    my %flags   = @_;

    my ($meadow_type, $meadow_name, $process_id, $meadow_host, $meadow_user, $resource_class_id, $resource_class_name,
        $no_write, $debug, $worker_log_dir, $hive_log_dir, $job_limit, $life_span, $no_cleanup, $retry_throwing_jobs, $can_respecialize)
     = @flags{qw(-meadow_type -meadow_name -process_id -meadow_host -meadow_user -resource_class_id -resource_class_name
            -no_write -debug -worker_log_dir -hive_log_dir -job_limit -life_span -no_cleanup -retry_throwing_jobs -can_respecialize)};

    foreach my $prev_worker_incarnation (@{ $self->fetch_all( "status!='DEAD' AND meadow_type='$meadow_type' AND meadow_name='$meadow_name' AND process_id='$process_id'" ) }) {
            # So far, 'RELOCATED' events have been detected on LSF 9.0 in response to sending signal #99 or #100.
            # Since I don't know how to avoid them, I am trying to register them when they happen.
            # The following snippet buries the previous incarnation of the Worker before starting a new one.
            #
            # FIXME: if the GarbageCollector (beekeeper -dead) gets to these processes first, it will register them as DEAD/UNKNOWN.
            #       LSF 9.0 does not report "rescheduling" events in the output of 'bacct', but does mention them in 'bhist'.
            #       So parsing 'bhist' output would probably yield the most accurate & confident registration of these events.
        $prev_worker_incarnation->cause_of_death( 'RELOCATED' );
        $self->register_worker_death( $prev_worker_incarnation );
    }

    my $resource_class;

    if( defined($resource_class_name) ) {
        $resource_class = $self->db->get_ResourceClassAdaptor->fetch_by_name($resource_class_name)
            or die "resource_class with name='$resource_class_name' could not be fetched from the database";
    } elsif( defined($resource_class_id) ) {
        $resource_class = $self->db->get_ResourceClassAdaptor->fetch_by_dbID($resource_class_id)
            or die "resource_class with dbID='$resource_class_id' could not be fetched from the database";
    }

    my $worker = Bio::EnsEMBL::Hive::Worker->new(
        'meadow_type'       => $meadow_type,
        'meadow_name'       => $meadow_name,
        'meadow_host'       => $meadow_host,
        'meadow_user'       => $meadow_user,
        'process_id'        => $process_id,
        'resource_class'    => $resource_class,
    );
    $self->store( $worker );
    my $worker_id = $worker->dbID;

    $worker = $self->fetch_by_dbID( $worker_id )    # refresh the object to get the fields initialized at SQL level (timestamps in this case)
        or die "Could not fetch worker with dbID=$worker_id";

    if($hive_log_dir or $worker_log_dir) {
        my $dir_revhash = dir_revhash($worker_id);
        $worker_log_dir ||= $hive_log_dir .'/'. ($dir_revhash ? "$dir_revhash/" : '') .'worker_id_'.$worker_id;

        eval {
            make_path( $worker_log_dir );
            1;
        } or die "Could not create '$worker_log_dir' directory : $@";

        $worker->log_dir( $worker_log_dir );
        $self->update_log_dir( $worker );   # autoloaded
    }

    $worker->init;

    if(defined($job_limit)) {
      $worker->job_limiter($job_limit);
      $worker->life_span(0);
    }

    $worker->life_span($life_span * 60)                 if($life_span);

    $worker->execute_writes(0)                          if($no_write);

    $worker->perform_cleanup(0)                         if($no_cleanup);

    $worker->debug($debug)                              if($debug);

    $worker->retry_throwing_jobs($retry_throwing_jobs)  if(defined $retry_throwing_jobs);

    $worker->can_respecialize($can_respecialize)        if(defined $can_respecialize);

    return $worker;
}


=head2 specialize_worker

  Description: If a job_id or an analyses_pattern is specified, the Queen will try to specialize the Worker accordingly.
               If neither is specified, the Queen will analyze the hive and pick the most suitable analysis.
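  Example    : # illustrative only -- re-running one specific job; the dbID is hypothetical:
               $queen->specialize_worker( $worker, { -job_id => 1234, -force => 1 } );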
  Caller     : Bio::EnsEMBL::Hive::Worker

=cut

sub specialize_worker {
    my $self    = shift @_;
    my $worker  = shift @_;
    my $flags   = shift @_;

    my ($analyses_pattern, $job_id, $force)
     = @$flags{qw(-analyses_pattern -job_id -force)};

    if( $analyses_pattern and $job_id ) {
        die "At most one of the options {-analyses_pattern, -job_id} can be set to pre-specialize a Worker";
    }

    my $analysis;

    if( $job_id ) {

        warn "resetting and fetching job for job_id '$job_id'\n";

        my $job_adaptor = $self->db->get_AnalysisJobAdaptor;

        my $job = $job_adaptor->fetch_by_dbID( $job_id )
            or die "Could not fetch job with dbID='$job_id'";
        my $job_status = $job->status();

        if($job_status =~/(CLAIMED|PRE_CLEANUP|FETCH_INPUT|RUN|WRITE_OUTPUT|POST_CLEANUP)/ ) {
            die "Job with dbID='$job_id' is already in progress, cannot run";   # FIXME: try GC first, then complain
        } elsif($job_status =~/(DONE|SEMAPHORED)/ and !$force) {
            die "Job with dbID='$job_id' is $job_status, please use -force 1 to override";
        }

        if(($job_status eq 'DONE') and $job->semaphored_job_id) {
            warn "Increasing the semaphore count of the dependent job";
            $job_adaptor->increase_semaphore_count_for_jobid( $job->semaphored_job_id );
        }

        $analysis = $job->analysis;

    } else {
        $analysis = Bio::EnsEMBL::Hive::Scheduler::suggest_analysis_to_specialize_a_worker($worker, $analyses_pattern);

        unless( ref($analysis) ) {

            $worker->cause_of_death('NO_ROLE');

            my $msg = $analysis // "No analysis suitable for the worker was found";
            die "$msg\n";
        }
    }

    my $new_role = Bio::EnsEMBL::Hive::Role->new(
        'worker'        => $worker,
        'analysis'      => $analysis,
    );
    $self->db->get_RoleAdaptor->store( $new_role );
    $worker->current_role( $new_role );

    my $analysis_stats_adaptor = $self->db->get_AnalysisStatsAdaptor;

    if($job_id) {
        my $role_id = $new_role->dbID;
        if( my $job = $self->db->get_AnalysisJobAdaptor->reset_or_grab_job_by_dbID($job_id, $role_id) ) {

            $worker->special_batch( [ $job ] );
        } else {
            die "Could not claim job with dbID='$job_id' for Role with dbID='$role_id'";
        }

    } else {    # count it as autonomous worker sharing the load of that analysis:

        $analysis_stats_adaptor->update_status($analysis->dbID, 'WORKING');
    }

        # The following increment used to be done only when no specific task was given to the worker,
        # thereby excluding such "special task" workers from being counted in num_running_workers.
        #
        # However this may be tricky to emulate by triggers that know nothing about "special tasks",
        # so I am (temporarily?) simplifying the accounting algorithm.
        #
    unless( $self->db->hive_use_triggers() ) {
        $analysis_stats_adaptor->increase_running_workers( $analysis->dbID );
    }
}

sub register_worker_death {
    my ($self, $worker, $update_when_checked_in) = @_;

    my $worker_id       = $worker->dbID;
    my $work_done       = $worker->work_done;
    my $cause_of_death  = $worker->cause_of_death || 'UNKNOWN';    # make sure we do not attempt to insert a void
    my $worker_died     = $worker->when_died;

    my $current_role    = $worker->current_role;

    unless( $current_role ) {
        $worker->current_role( $current_role = $self->db->get_RoleAdaptor->fetch_last_unfinished_by_worker_id( $worker_id ) );
    }

    if( $current_role and !$current_role->when_finished() ) {
        # List of cause_of_death:
        # only happen before or after a batch: 'NO_ROLE','NO_WORK','JOB_LIMIT','HIVE_OVERLOAD','LIFESPAN','SEE_MSG'
        # can happen whilst the worker is running a batch: 'CONTAMINATED','RELOCATED','KILLED_BY_USER','MEMLIMIT','RUNLIMIT','SEE_MSG','UNKNOWN'
        my $release_undone_jobs = ($cause_of_death =~ /^(CONTAMINATED|RELOCATED|KILLED_BY_USER|MEMLIMIT|RUNLIMIT|SEE_MSG|UNKNOWN)$/);
        $current_role->worker($worker); # So that release_undone_jobs_from_role() has the correct cause_of_death and work_done
        $current_role->when_finished( $worker_died );
        $self->db->get_RoleAdaptor->finalize_role( $current_role, $release_undone_jobs );
    }

    my $sql = "UPDATE worker SET status='DEAD', work_done='$work_done', cause_of_death='$cause_of_death'"
            . ( $update_when_checked_in ? ', when_checked_in=CURRENT_TIMESTAMP ' : '' )
            . ( $worker_died ? ", when_died='$worker_died'" : ', when_died=CURRENT_TIMESTAMP' )
            . " WHERE worker_id='$worker_id' ";

    $self->dbc->protected_prepare_execute( [ $sql ],
        sub { my ($after) = @_; $self->db->get_LogMessageAdaptor->store_worker_message( $worker, "register_worker_death".$after, 0 ); }
    );
}

sub meadow_type_2_name_2_users_of_running_workers {
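    # Returns a nested hashref of counts of non-DEAD workers, keyed by meadow_type, then meadow_name, then meadow_user
    # (this is the structure that check_for_dead_workers() walks when deciding which Meadows to query).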
    my $self = shift @_;

    return $self->count_all("status!='DEAD'", ['meadow_type', 'meadow_name', 'meadow_user']);
}


sub check_for_dead_workers {    # scans the whole Valley for lost Workers (but ignores unreachable ones)
    my ($self, $valley, $check_buried_in_haste) = @_;

    my $last_few_seconds            = 5;    # FIXME: It is probably a good idea to expose this parameter for easier tuning.

    warn "GarbageCollector:\tChecking for lost Workers...\n";

    my $meadow_type_2_name_2_users      = $self->meadow_type_2_name_2_users_of_running_workers();
    my %signature_and_pid_to_worker_status = ();

    while(my ($meadow_type, $level2) = each %$meadow_type_2_name_2_users) {

        if(my $meadow = $valley->available_meadow_hash->{$meadow_type}) {   # if this Valley supports $meadow_type at all...
            while(my ($meadow_name, $level3) = each %$level2) {

                if($meadow->cached_name eq $meadow_name) {  # and we can reach the same $meadow_name from this Valley...
                    my $meadow_users_of_interest    = [ keys %$level3 ];
                    my $meadow_signature            = $meadow_type.'/'.$meadow_name;
                    $signature_and_pid_to_worker_status{$meadow_signature} ||= $meadow->status_of_all_our_workers( $meadow_users_of_interest );
                }
            }
        }
    }

    my $queen_overdue_workers       = $self->fetch_overdue_workers( $last_few_seconds );    # check the workers we have not seen active during the $last_few_seconds
    warn "GarbageCollector:\t[Queen:] out of ".scalar(@$queen_overdue_workers)." Workers that haven't checked in during the last $last_few_seconds seconds...\n";

    my $update_when_seen_sql = "UPDATE worker SET when_seen=CURRENT_TIMESTAMP WHERE worker_id=?";
    my $update_when_seen_sth;

    my %meadow_status_counts        = ();
    my %mt_and_pid_to_lost_worker   = ();
    foreach my $worker (@$queen_overdue_workers) {

        my $meadow_signature    = $worker->meadow_type.'/'.$worker->meadow_name;
        if(my $pid_to_worker_status = $signature_and_pid_to_worker_status{$meadow_signature}) {   # the whole Meadow subhash is either present or the Meadow is unreachable

            my $meadow_type = $worker->meadow_type;
            my $process_id  = $worker->process_id;
            if(my $status = $pid_to_worker_status->{$process_id}) {  # can be RUN|PEND|xSUSP
                $meadow_status_counts{$meadow_signature}{$status}++;

                    # only prepare once at most:
                $update_when_seen_sth ||= $self->prepare( $update_when_seen_sql );

                $update_when_seen_sth->execute( $worker->dbID );
            } else {
                $meadow_status_counts{$meadow_signature}{'LOST'}++;

                $mt_and_pid_to_lost_worker{$meadow_type}{$process_id} = $worker;
            }
        } else {
            $meadow_status_counts{$meadow_signature}{'UNREACHABLE'}++;   # Worker is unreachable from this Valley
        }
    }

    $update_when_seen_sth->finish() if $update_when_seen_sth;

        # print a quick summary report:
    while(my ($meadow_signature, $status_count) = each %meadow_status_counts) {
        warn "GarbageCollector:\t[$meadow_signature Meadow:]\t".join(', ', map { "$_:$status_count->{$_}" } keys %$status_count )."\n\n";
    }

    while(my ($meadow_type, $pid_to_lost_worker) = each %mt_and_pid_to_lost_worker) {
        my $this_meadow = $valley->available_meadow_hash->{$meadow_type};

        if(my $lost_this_meadow = scalar(keys %$pid_to_lost_worker) ) {
            warn "GarbageCollector:\tDiscovered $lost_this_meadow lost $meadow_type Workers\n";

            my $report_entries = {};

            if($this_meadow->can('find_out_causes')) {
                die "Your Meadow::$meadow_type driver now has to support get_report_entries_for_process_ids() method instead of find_out_causes(). Please update it.\n";

            } elsif($this_meadow->can('get_report_entries_for_process_ids')) {
                $report_entries = $this_meadow->get_report_entries_for_process_ids( keys %$pid_to_lost_worker );
                my $lost_with_known_cod = scalar( grep { $_->{'cause_of_death'} } values %$report_entries);
                warn "GarbageCollector:\tFound why $lost_with_known_cod of $meadow_type Workers died\n";
            } else {
                warn "GarbageCollector:\t$meadow_type meadow does not support post-mortem examination\n";
            }

            warn "GarbageCollector:\tReleasing the jobs\n";
            while(my ($process_id, $worker) = each %$pid_to_lost_worker) {
                $worker->when_died(         $report_entries->{$process_id}{'when_died'} );
                $worker->cause_of_death(    $report_entries->{$process_id}{'cause_of_death'} );
                $self->register_worker_death( $worker );
            }

            if( %$report_entries ) {    # use the opportunity to also store resource usage of the buried workers:
                my $processid_2_workerid = { map { $_ => $pid_to_lost_worker->{$_}->dbID } keys %$pid_to_lost_worker };
                $self->store_resource_usage( $report_entries, $processid_2_workerid );
            }
        }
    }

        # the following bit is completely Meadow-agnostic and only restores database integrity:
    if($check_buried_in_haste) {
        warn "GarbageCollector:\tChecking for Workers/Roles buried in haste...\n";
        my $buried_in_haste_list = $self->db->get_RoleAdaptor->fetch_all_finished_roles_with_unfinished_jobs();
        if(my $bih_number = scalar(@$buried_in_haste_list)) {
            warn "GarbageCollector:\tfound $bih_number jobs, reclaiming.\n\n";
            if($bih_number) {
                my $job_adaptor = $self->db->get_AnalysisJobAdaptor;
                foreach my $role (@$buried_in_haste_list) {
                    $job_adaptor->release_undone_jobs_from_role( $role );
                }
            }
        } else {
            warn "GarbageCollector:\tfound none\n";
        }
    }
}


    # a new version that both checks in and updates the status
sub check_in_worker {
    my ($self, $worker) = @_;
    my $sql = "UPDATE worker SET when_checked_in=CURRENT_TIMESTAMP, status='".$worker->status."', work_done='".$worker->work_done."' WHERE worker_id='".$worker->dbID."'";

    $self->dbc->protected_prepare_execute( [ $sql ],
        sub { my ($after) = @_; $self->db->get_LogMessageAdaptor->store_worker_message( $worker, "check_in_worker".$after, 0 ); }
    );
}


=head2 reset_job_by_dbID_and_sync

  Arg [1]    : int $job_id
  Example    :
    $queen->reset_job_by_dbID_and_sync($job_id);
  Description:
    For the specified job_id, fetches just that job,
    resets it completely as if it had never run,
    and synchronizes the stats of its analysis.
    Specifying a particular job bypasses the safety checks,
    thus multiple workers could end up running the
    same job simultaneously (use only for debugging).
  Returntype : none
  Exceptions :
  Caller     : beekeeper.pl

=cut

sub reset_job_by_dbID_and_sync {
    my ($self, $job_id) = @_;

    my $job     = $self->db->get_AnalysisJobAdaptor->reset_or_grab_job_by_dbID($job_id);

    my $stats   = $job->analysis->stats;

    $self->synchronize_AnalysisStats($stats);
}


######################################
#
# Public API interface for beekeeper
#
######################################

    # Note: asking for Queen->fetch_overdue_workers(0) essentially means
    #       "fetch all workers known to the Queen not to be officially dead"
    #
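    # e.g. (illustrative): fetch the workers that have not checked in for over five minutes:
    #
    #       my $overdue_workers = $queen->fetch_overdue_workers( 5*60 );
    #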
sub fetch_overdue_workers {
    my ($self,$overdue_secs) = @_;

    $overdue_secs = 3600 unless(defined($overdue_secs));

    my $constraint = "status!='DEAD' AND ".{
            'mysql'     =>  "(UNIX_TIMESTAMP()-UNIX_TIMESTAMP(when_checked_in)) > $overdue_secs",
            'sqlite'    =>  "(strftime('%s','now')-strftime('%s',when_checked_in)) > $overdue_secs",
            'pgsql'     =>  "EXTRACT(EPOCH FROM CURRENT_TIMESTAMP - when_checked_in) > $overdue_secs",
        }->{ $self->dbc->driver };

    return $self->fetch_all( $constraint );
}


=head2 synchronize_hive

  Arg [1]    : $list_of_analyses
  Example    : $queen->synchronize_hive( [ $analysis_A, $analysis_B ] );
  Description: Runs through all analyses in the given list and synchronizes
               the analysis_stats summary with the states in the job and worker tables.
               It then checks all the blocking rules and blocks/unblocks analyses as needed.
  Exceptions : none
  Caller     : general

=cut

sub synchronize_hive {
    my ($self, $list_of_analyses) = @_;

    my $start_time = time();

    print STDERR "\nSynchronizing the hive (".scalar(@$list_of_analyses)." analyses this time):\n";
    foreach my $analysis (@$list_of_analyses) {
        $self->synchronize_AnalysisStats($analysis->stats);
        print STDERR ( ($analysis->stats()->status eq 'BLOCKED') ? 'x' : 'o');
    }
    print STDERR "\n";

    print STDERR ''.((time() - $start_time))." seconds to synchronize_hive\n\n";
}


=head2 safe_synchronize_AnalysisStats

  Arg [1]    : Bio::EnsEMBL::Hive::AnalysisStats object
  Example    : $self->safe_synchronize_AnalysisStats($stats);
  Description: A wrapper around synchronize_AnalysisStats that performs checks
               and grabs the sync_lock before proceeding with the sync.
               Used by the distributed worker sync system to avoid contention.
               Returns 1 on success and 0 if the lock could not be obtained,
               in which case no sync was attempted.
  Returntype : boolean
  Caller     : general

=cut

sub safe_synchronize_AnalysisStats {
    my ($self, $stats) = @_;

    my $max_refresh_attempts = 5;
    while($stats->sync_lock and $max_refresh_attempts--) {   # another Worker/Beekeeper is synching this analysis right now
            # ToDo: it would be nice to report the detected collision
        sleep(1);
        $stats->refresh();  # just try to avoid collision
    }

    unless( ($stats->status eq 'DONE')
         or ( ($stats->status eq 'WORKING') and defined($stats->seconds_since_when_updated) and ($stats->seconds_since_when_updated < 3*60) ) ) {

        my $sql = "UPDATE analysis_stats SET status='SYNCHING', sync_lock=1 ".
                  "WHERE sync_lock=0 and analysis_id=" . $stats->analysis_id;

        my $row_count = $self->dbc->do($sql);   # try to claim the sync_lock

        if( $row_count == 1 ) {     # if we managed to obtain the lock, let's go and perform the sync:
            $self->synchronize_AnalysisStats($stats);   
            return 1;
        } # otherwise assume it's locked and just return un-updated
    }

    return 0;
}


=head2 synchronize_AnalysisStats

  Arg [1]    : Bio::EnsEMBL::Hive::AnalysisStats object
  Example    : $self->synchronize_AnalysisStats( $stats );
  Description: Queries the job and worker tables to get summary counts
               and rebuilds the AnalysisStats object.
               Then updates the analysis_stats table with the new summary info.
  Exceptions : none
  Caller     : general

=cut

sub synchronize_AnalysisStats {
    my ($self, $stats) = @_;

    if( $stats and $stats->analysis_id ) {

        $stats->refresh(); ## Need to get the new hive_capacity for dynamic analyses

        my $job_counts = $self->db->hive_use_triggers() ? undef : $self->db->get_AnalysisJobAdaptor->fetch_job_counts_hashed_by_status( $stats->analysis_id );

        $stats->recalculate_from_job_counts( $job_counts );

        # $stats->sync_lock(0); ## do we perhaps need it here?
        $stats->update;  #update and release sync_lock
    }
}


=head2 check_nothing_to_run_but_semaphored

  Arg [1]    : $list_of_analyses
  Example    : $self->check_nothing_to_run_but_semaphored( [ $analysis_A, $analysis_B ] );
  Description: Checks whether the only jobs left to run in the given analyses are semaphored ones;
               returns true if there is at least one semaphored job and no other unfinished work.
  Exceptions : none
  Caller     : Scheduler
=cut

sub check_nothing_to_run_but_semaphored {   # make sure it is run after a recent sync
    my ($self, $list_of_analyses) = @_;

    my $only_semaphored_jobs_to_run = 1;
    my $total_semaphored_job_count  = 0;

    foreach my $analysis (@$list_of_analyses) {
        my $stats = $analysis->stats;

        $only_semaphored_jobs_to_run = 0 if( $stats->total_job_count != $stats->done_job_count + $stats->failed_job_count + $stats->semaphored_job_count );
        $total_semaphored_job_count += $stats->semaphored_job_count;
    }

    return ( $total_semaphored_job_count && $only_semaphored_jobs_to_run );
}


=head2 print_status_and_return_reasons_to_exit

  Arg [1]    : $list_of_analyses
  Example    : my $reasons_to_exit = $queen->print_status_and_return_reasons_to_exit( [ $analysis_A, $analysis_B ] );
  Description: Runs through all analyses in the given list, reports failed analyses, computes some totals, prints a combined status line
               and returns a multi-line string of reasons to exit (an empty string if there are none).
  Exceptions : none
  Caller     : beekeeper.pl

=cut

sub print_status_and_return_reasons_to_exit {
    my ($self, $list_of_analyses) = @_;

    my ($total_done_jobs, $total_failed_jobs, $total_jobs, $cpumsec_to_do) = (0) x 4;
    my $reasons_to_exit = '';

    my $max_logic_name_length = max(map {length($_->logic_name)} @$list_of_analyses);

    foreach my $analysis (sort {$a->dbID <=> $b->dbID} @$list_of_analyses) {
        my $stats               = $analysis->stats;
        my $failed_job_count    = $stats->failed_job_count;

        print $stats->toString($max_logic_name_length) . "\n";

        if( $stats->status eq 'FAILED') {
            my $logic_name    = $analysis->logic_name;
            my $tolerance     = $analysis->failed_job_tolerance;
            $reasons_to_exit .= "### Analysis '$logic_name' has FAILED  (failed Jobs: $failed_job_count, tolerance: $tolerance\%) ###\n";
        }

        $total_done_jobs    += $stats->done_job_count;
        $total_failed_jobs  += $failed_job_count;
        $total_jobs         += $stats->total_job_count;
        $cpumsec_to_do      += $stats->ready_job_count * $stats->avg_msec_per_job;
    }

    my $total_jobs_to_do        = $total_jobs - $total_done_jobs - $total_failed_jobs;         # includes SEMAPHORED, READY, CLAIMED, INPROGRESS
    my $cpuhrs_to_do            = $cpumsec_to_do / (1000.0*60*60);
    my $percentage_completed    = $total_jobs
                                    ? (($total_done_jobs+$total_failed_jobs)*100.0/$total_jobs)
                                    : 0.0;

    printf("total over %d analyses : %6.2f%% complete (< %.2f CPU_hrs) (%d to_do + %d done + %d failed = %d total)\n",
                scalar(@$list_of_analyses), $percentage_completed, $cpuhrs_to_do, $total_jobs_to_do, $total_done_jobs, $total_failed_jobs, $total_jobs);
    unless( $total_jobs_to_do ) {
        $reasons_to_exit .= "### No jobs left to do ###\n";
    }

    return $reasons_to_exit;
}


=head2 register_all_workers_dead

  Example    : $queen->register_all_workers_dead();
  Description: Registers as dead all workers that are not already registered as DEAD
  Exceptions : none
  Caller     : beekeeper.pl

=cut

sub register_all_workers_dead {
    my $self = shift;

    my $all_workers_considered_alive = $self->fetch_all( "status!='DEAD'" );
    foreach my $worker (@{$all_workers_considered_alive}) {
        $self->register_worker_death( $worker );
    }
}

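    # Note: for each meadow_type/meadow_name, returns the [min(when_born), max(when_died)] interval
    #       and the count of workers that do not yet have a worker_resource_usage entry.
    #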
sub interval_workers_with_unknown_usage {
    my $self = shift @_;

    my %meadow_to_interval = ();

    my $sql_times = qq{
        SELECT meadow_type, meadow_name, min(when_born), max(when_died), count(*)
        FROM worker w
        LEFT JOIN worker_resource_usage u USING(worker_id)
        WHERE u.worker_id IS NULL
        GROUP BY meadow_type, meadow_name
    };
    my $sth_times = $self->prepare( $sql_times );
    $sth_times->execute();
    while( my ($meadow_type, $meadow_name, $min_born, $max_died, $workers_count) = $sth_times->fetchrow_array() ) {
        $meadow_to_interval{$meadow_type}{$meadow_name} = {
            'min_born'      => $min_born,
            'max_died'      => $max_died,
            'workers_count' => $workers_count,
        };
    }
    $sth_times->finish();

    return \%meadow_to_interval;
}


sub store_resource_usage {
    my ($self, $report_entries, $processid_2_workerid) = @_;

    # FIXME: An UPSERT would be better here, but it is only promised in PostgreSQL starting from 9.5, which is not officially out yet.

    my $sql_delete = 'DELETE FROM worker_resource_usage WHERE worker_id=?';
    my $sth_delete = $self->prepare( $sql_delete );

    my $sql_insert = 'INSERT INTO worker_resource_usage (worker_id, exit_status, mem_megs, swap_megs, pending_sec, cpu_sec, lifespan_sec, exception_status) VALUES (?, ?, ?, ?, ?, ?, ?, ?)';
    my $sth_insert = $self->prepare( $sql_insert );

    my @not_ours = ();

    while( my ($process_id, $report_entry) = each %$report_entries ) {

        if( my $worker_id = $processid_2_workerid->{$process_id} ) {
            $sth_delete->execute( $worker_id );
            $sth_insert->execute( $worker_id, @$report_entry{'exit_status', 'mem_megs', 'swap_megs', 'pending_sec', 'cpu_sec', 'lifespan_sec', 'exception_status'} );  # slicing hashref
        } else {
            push @not_ours, $process_id;
            #warn "\tDiscarding process_id=$process_id as probably not ours because it could not be mapped to a Worker\n";
        }
    }
    $sth_delete->finish();
    $sth_insert->finish();
}


1;