ensembl-gh-mirror / ensembl-hive / Commits / 02313941

Commit 02313941, authored 13 years ago by Leo Gordon
    documentation and better user interface (dumping and undumping supported)

Parent: 44f04b93
No related branches, tags or merge requests found.
Changes: 1 changed file, scripts/lsf_report.pl (+131 additions, -65 deletions)

scripts/lsf_report.pl @ 02313941:
@@ -5,99 +5,165 @@

The previous version's top-level option handling (-url, -infile and -lsf_user parsed outside
main(), a bare die when -url was missing, and an unconditional "bacct -C $from_time,$to_time
$lsf_user -l |" pipe) is replaced by the version below, which moves option parsing into main(),
adds -help, and accepts a -dump file that is either read back (if it already exists) or written
through tee while bacct runs.

use strict;
use warnings;
use Getopt::Long;
use Bio::EnsEMBL::Hive::URLFactory;
use Bio::EnsEMBL::Hive::Utils ('script_usage');

main();
exit(0);

sub main {

    my ($url, $bacct_source_line, $lsf_user, $help);

    GetOptions(
            'url=s'         => \$url,
            'dump|file=s'   => \$bacct_source_line,
            'lu|lsf_user=s' => \$lsf_user,
            'h|help'        => \$help,
    );

    if ($help) { script_usage(0); }

    unless ($url) {
        print "\nERROR : --url is an obligatory parameter for connecting to your database\n\n";
        script_usage(1);
    }

    my $dba = Bio::EnsEMBL::Hive::URLFactory->fetch( $url ) || die "Unable to connect to pipeline database '$url'\n";
    my $dbc = $dba->dbc();

    warn "Creating the 'lsf_report' table if it doesn't exist...\n";
    $dbc->do( qq{
        CREATE TABLE IF NOT EXISTS lsf_report (
            process_id          varchar(40) NOT NULL,
            status              varchar(20) NOT NULL,
            mem                 int NOT NULL,
            swap                int NOT NULL,
            exception_status    varchar(40) NOT NULL,

            PRIMARY KEY (process_id)

        ) ENGINE=InnoDB;
    });

    if( $bacct_source_line && -r $bacct_source_line ) {

        warn "Parsing given bacct file '$bacct_source_line'...\n";

    } else {

        warn "No bacct information given, finding out the time interval when the pipeline was run...\n";

        my $sth_times = $dbc->prepare( 'SELECT min(born), max(died) FROM worker WHERE meadow_type="LSF"' );
        $sth_times->execute();
        my ($from_time, $to_time) = $sth_times->fetchrow_array();
        $sth_times->finish();

        $from_time =~ s/[- ]/\//g;
        $from_time =~ s/:\d\d$//;
        $to_time   =~ s/[- ]/\//g;
        $to_time   =~ s/:\d\d$//;

        warn "\tfrom=$from_time, to=$to_time\n";

        $lsf_user = $lsf_user          ? "-u $lsf_user"             : '';
        my $tee   = $bacct_source_line ? "| tee $bacct_source_line" : '';

        $bacct_source_line = "bacct -l -C $from_time,$to_time $lsf_user $tee |";

        warn 'Will run the following command to obtain '.($tee ? 'and dump ' : '')."bacct information: '$bacct_source_line' (may take a few minutes)\n";
    }

    my $sth_replace = $dbc->prepare( 'REPLACE INTO lsf_report (process_id, status, mem, swap, exception_status) VALUES (?, ?, ?, ?, ?)' );
    {
        local $/ = "------------------------------------------------------------------------------\n\n";
        open(my $bacct_fh, $bacct_source_line);
        my $record = <$bacct_fh>;   # skip the header

        for my $record (<$bacct_fh>) {
            chomp $record;

            # warn "RECORD:\n$record";

            my @lines = split(/\n/, $record);
            if( my ($process_id) = $lines[0] =~ /^Job <(\d+(?:\[\d+\])?)>/ ) {
                my $exception_status = '';
                foreach (@lines) {
                    if( /^\s*EXCEPTION STATUS:\s*(.*?)\s*$/ ) {
                        $exception_status = $1;
                        $exception_status =~ s/\s+/;/g;
                    }
                }

                my (@keys)   = split(/\s+/, $lines[@lines-2]);
                my (@values) = split(/\s+/, $lines[@lines-1]);
                my %usage    = map { ($keys[$_] => $values[$_]) } (0..@keys-1);

                my ($mem)  = $usage{MEM}  =~ /^(\d+)[KMG]$/;
                my ($swap) = $usage{SWAP} =~ /^(\d+)[KMG]$/;

                #warn "PROC_ID=$process_id, STATUS=$usage{STATUS}, MEM=$usage{MEM}, SWAP=$usage{SWAP}, EXC_STATUS='$exception_status'\n";

                $sth_replace->execute( $process_id, $usage{STATUS}, $mem, $swap, $exception_status );
            }
        }
        close $bacct_fh;
    }
    $sth_replace->finish();

    warn "\nReport has been loaded into pipeline's lsf_report table. Enjoy.\n";
}
__DATA__
=pod
=head1 NAME
lsf_report.pl
=head1 DESCRIPTION
This script is used for offline examination of resources used by a Hive pipeline running on LSF
(the script is [Pp]latform-dependent).
Based on the start time of the first worker and the end time of the last worker (as recorded in the pipeline DB)
it pulls the relevant data out of LSF's 'bacct' database, parses it and stores it in the 'lsf_report' table.
You can join this table to the 'worker' table USING(process_id) in the usual MySQL way
to filter by analysis_id, compute various statistics, etc.
You can optionally ask the script to dump the 'bacct' output into a dump file,
or to fill the 'lsf_report' table from an existing dump file (most of the time is taken by querying bacct).
Please note the script may additionally pull information about LSF processes that you ran concurrently
with the pipeline. They are easy to ignore by joining to the 'worker' table.
=head1 USAGE EXAMPLES
# Just run it the usual way: query 'bacct' and load the relevant data into 'lsf_report' table:
lsf_report.pl -url mysql://username:secret@hostname:port/long_mult_test
# The same, but assuming LSF user someone_else ran the pipeline:
lsf_report.pl -url mysql://username:secret@hostname:port/long_mult_test -lsf_user someone_else
# Assuming the dump file existed. Load the dumped bacct data into 'lsf_report' table:
lsf_report.pl -url mysql://username:secret@hostname:port/long_mult_test -dump long_mult.bacct
# Assuming the dump file did not exist. Query 'bacct', dump the data into a file and load it into 'lsf_report':
lsf_report.pl -url mysql://username:secret@hostname:port/long_mult_test -dump long_mult_again.bacct
=head1 OPTIONS
-help : print this help
-url <url string> : url defining where hive database is located
-dump <filename> : a filename for bacct dump. It will be read from if the file exists, and written to otherwise.
-lsf_user <username> : if it wasn't you who ran the pipeline, LSF user name of that user can be provided
=head1 CONTACT
Please contact ehive-users@ebi.ac.uk mailing list with questions/suggestions.
=cut
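A note on using the loaded table: the DESCRIPTION above suggests joining 'lsf_report' to the 'worker' table USING(process_id) to filter by analysis_id and compute statistics. A minimal sketch of such a query follows; it assumes the usual eHive 'worker' table with 'process_id' and 'analysis_id' columns (check your own hive schema), and is an illustration rather than part of the script:

    -- Hypothetical example: per-analysis memory/swap statistics from the loaded lsf_report table.
    -- Assumes worker.process_id and worker.analysis_id exist in your hive database.
    SELECT   w.analysis_id,
             count(*)          AS lsf_jobs,
             round(avg(r.mem)) AS avg_mem,
             max(r.mem)        AS max_mem,
             max(r.swap)       AS max_swap
    FROM     lsf_report r
    JOIN     worker w USING (process_id)
    GROUP BY w.analysis_id;

Because the join only keeps process_ids that belong to workers, it also drops any unrelated LSF jobs that bacct happened to report for the same time window.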