	eHive installation, setup and usage.

1. Download and install the necessary external software:

1.1. Perl 5.10 or higher, since eHive code is written in Perl

	see http://www.perl.com/download.csp
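
	A quick way to check the locally installed Perl (the second one-liner below
	simply fails if the interpreter is older than 5.10):

	$ perl -v
	$ perl -e 'use 5.010; print "Perl $] is recent enough\n"'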

1.2. A database engine of your choice

	eHive keeps its state in a database, so you will need
	(1) a server installed on the machine where you want to maintain the state and
	(2) clients installed on the machines where the jobs are to be executed.

	At the moment, the following engines are supported:

1.2.1. MySQL 5.1 or higher

	see http://dev.mysql.com/downloads/

1.2.2. SQLite 3.6 or higher

	see http://www.sqlite.org/download.html

1.2.3. PostgreSQL 9.2 or higher

	see http://www.postgresql.org/download/
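
	A quick way to check that the corresponding client programs are installed and recent enough
	(only the engine you actually plan to use needs to be present):

	$ mysql --version
	$ sqlite3 --version
	$ psql --version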

1.3. Perl DBI API version 1.6 or higher

	The Perl database interface (DBI), together with the DBD driver for your chosen engine
	(e.g. DBD::mysql, DBD::SQLite or DBD::Pg)

	see http://dbi.perl.org/
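
	One way to check that both DBI and the driver for your engine are installed
	(DBD::mysql is shown; substitute DBD::SQLite or DBD::Pg as appropriate):

	$ perl -MDBI -e 'print "DBI $DBI::VERSION\n"'
	$ perl -MDBD::mysql -e 'print "DBD::mysql $DBD::mysql::VERSION\n"'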

1.4. Perl libraries for visualisation (optional)

	You can find them on CPAN:
	 - GraphViz         (needed for generate_graph.pl)
	 - Chart::Gnuplot   (needed for generate_timeline.pl)
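
	One possible way to install them, assuming a configured command-line CPAN client
	(cpanm or your distribution's packages work equally well):

	$ cpan GraphViz Chart::Gnuplot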


2. Download and install essential repositories from EnsEMBL @ GitHub

2.1. It is advisable to have a dedicated directory where EnsEMBL-related packages will be deployed.
Unlike the DBI modules, which can be installed system-wide by the system administrator,
the EnsEMBL files/directories are ones you will want full (read+write) access to,
so it is best to install them under your home directory. For example:

	$ mkdir $HOME/ensembl_main

It will be convenient to set a variable pointing at this directory for future use:

# using bash syntax:
	$ export ENSEMBL_CVS_ROOT_DIR="$HOME/ensembl_main"
		#
		# (for best results, append this line to your ~/.bashrc configuration file)

# using [t]csh syntax:
	$ setenv ENSEMBL_CVS_ROOT_DIR "$HOME/ensembl_main"
		#
		# (for best results, append this line to your ~/.cshrc or ~/.tcshrc configuration file)

2.2. Change into your ensembl codebase directory:

	$ cd $ENSEMBL_CVS_ROOT_DIR

2.3. Check out the Ensembl repositories by cloning them from GitHub:

	$ git clone https://github.com/Ensembl/ensembl.git        # core API
	$ git clone https://github.com/Ensembl/ensembl-hive.git   # Ensembl Hive

2.4. Add the newly checked-out packages to the PERL5LIB variable:

# using bash syntax:
	$ export PERL5LIB=${PERL5LIB}:${ENSEMBL_CVS_ROOT_DIR}/ensembl/modules
	$ export PERL5LIB=${PERL5LIB}:${ENSEMBL_CVS_ROOT_DIR}/ensembl-hive/modules
		#
		# (for best results, append these lines to your ~/.bashrc configuration file)

# using [t]csh syntax:
	$ setenv PERL5LIB  ${PERL5LIB}:${ENSEMBL_CVS_ROOT_DIR}/ensembl/modules
	$ setenv PERL5LIB  ${PERL5LIB}:${ENSEMBL_CVS_ROOT_DIR}/ensembl-hive/modules
		#
		# (for best results, append these lines to your ~/.cshrc or ~/.tcshrc configuration file)
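
A quick sanity check that Perl can now find both APIs (the module names below follow
the standard layout of the two repositories, but verify them against your own checkout):

	$ perl -MBio::EnsEMBL::Registry -e 1 && echo "ensembl core API found"
	$ perl -MBio::EnsEMBL::Hive::Utils -e 1 && echo "eHive API found"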


3. Useful files and directories of the eHive repository.

3.1 In ensembl-hive/scripts we keep the Perl scripts used to control pipelines.
    Adding this directory to your $PATH may make your life easier (see the example session after the list below).

    * init_pipeline.pl is used to create eHive databases, populate hive-specific and pipeline-specific tables, and load data

    * seed_pipeline.pl is used to add new jobs to the pipeline.

    * beekeeper.pl is used to run the pipeline: it sends 'Workers' to the 'Meadow' to run the pipeline's jobs
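
    For orientation, here is a sketch of a typical session that strings the three scripts together.
    The module, analysis and parameter names below are purely illustrative, and option names may
    differ between eHive versions, so check each script's --help output for the authoritative set:

	# make the scripts directly runnable:
	$ export PATH="$PATH:$ENSEMBL_CVS_ROOT_DIR/ensembl-hive/scripts"

	# create and populate the pipeline database from a PipeConfig module (here the long multiplication example):
	$ init_pipeline.pl Bio::EnsEMBL::Hive::PipeConfig::LongMult_conf -pipeline_url sqlite:///my_long_mult

	# seed an extra job into one of the analyses (names are placeholders):
	$ seed_pipeline.pl -url sqlite:///my_long_mult -logic_name <analysis_name> -input_id '{ "param_name" => "param_value" }'

	# keep sending Workers to the Meadow until all jobs are done:
	$ beekeeper.pl -url sqlite:///my_long_mult -loop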

3.2 In ensembl-hive/modules/Bio/EnsEMBL/Hive/PipeConfig we keep example pipeline configuration modules that can be used by init_pipeline.pl.
    A PipeConfig is a parametric module that defines the structure of the pipeline,
    that is, which analyses will have to be run, with what parameters, and in which order.
    The code for each analysis is contained in a RunnableDB module.
    For some tasks bespoke RunnableDBs have to be written, whereas other problems can be solved using only 'universal building blocks'.
    A typical pipeline is a mixture of both.
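
    Once a pipeline database has been created from a PipeConfig module, its structure can be
    visualised with generate_graph.pl (see section 1.4); the options shown here are an example
    and may differ between eHive versions, so check the script's --help output:

	$ generate_graph.pl -url sqlite:///my_long_mult -output my_long_mult_diagram.png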

3.3 In ensembl-hive/modules/Bio/EnsEMBL/Hive/RunnableDB we keep 'universal building block' RunnableDBs:

    * SystemCmd.pm  is a parameter substitution wrapper for any command line executed by the current shell

    * SqlCmd.pm     is a parameter substitution wrapper for running any SQL query or a session of linked queries
                    against a particular database (the eHive pipeline database by default, but not necessarily)

    * JobFactory.pm is a universal module for dynamically creating batches of jobs of the same analysis (with different parameters)
                    to be run within the current pipeline
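
    Individual RunnableDBs can also be tried out on their own, without creating a pipeline database.
    A minimal sketch, assuming your checkout ships scripts/standaloneJob.pl (parameter passing may
    differ between eHive versions):

	$ standaloneJob.pl Bio::EnsEMBL::Hive::RunnableDB::SystemCmd -cmd 'echo "Hello, eHive"'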

3.4 In ensembl-hive/modules/Bio/EnsEMBL/Hive/RunnableDB/LongMult we keep the bespoke RunnableDBs for the long multiplication example pipeline.