ensembl-gh-mirror / ensembl / Commits / 85ff76b6

Commit 85ff76b6, authored 20 years ago by Ian Longden

    -create option now in xref_parser.pl so tutorial now a lot shorter.

Parent: 1ea136bf
Changes: 1 changed file

misc-scripts/xref_mapping/README (+7 additions, -24 deletions)
...
...
@@ -45,31 +45,13 @@ ZFIN Zebrafish ZFINParser.pm
 General Tutorial
-First we need to create a database to store all the data in:-
-mysql -hhost1 -P3350 -uadmin -ppassword -e"create database xref_store"
-Now create the tables needed:-
-mysql -hhost1 -P3350 -uadmin -ppassword -Dxref_store < sql/table.sql
-Now populate the tables with the initial data on what species and sources
-are available:-
-mysql -hhost1 -P3350 -uadmin -ppassword -Dxref_store < sql/populate_metadata.sql
+To populate the database with the xref data you will need to run
+xref_parser.pl with the appropriate arguments. The script will create a
+directory for each source you specify (or all) and download the data (unless
+-skipdownload is specified) before parsing them.
 The perl script to create and populate the database is xref_parser.pl
 xref_parser --help produces:-
-xref_parser.pm -user {user} -pass {password} -host {host} -port {port}
+xref_parser.pl -user {user} -pass {password} -host {host} -port {port}
                -dbname {database} -species {species1,species2}
                -source {source1,source2} -skipdownload
+               -create
 If no source is specified then all sources are loaded. The same is done for
 species, so it is best to specify this one or the script may take a while.
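
The hunk above replaces three manual mysql invocations with a single flag. As a hedged sketch of the before-and-after workflows (host1, port 3350, and the admin credentials are the tutorial's placeholder values, not real servers):

```shell
# Old workflow: create and seed the xref database by hand
# (placeholder credentials from the tutorial; adjust for your own MySQL server).
mysql -hhost1 -P3350 -uadmin -ppassword -e "create database xref_store"
mysql -hhost1 -P3350 -uadmin -ppassword -Dxref_store < sql/table.sql
mysql -hhost1 -P3350 -uadmin -ppassword -Dxref_store < sql/populate_metadata.sql

# New workflow after this commit: xref_parser.pl creates and populates
# the schema itself when given -create.
perl xref_parser.pl -user admin -pass password -host host1 -port 3350 \
     -dbname xref_store -species human -create
```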
...
...
@@ -78,12 +60,13 @@ species so it is best to specify this one or the script may take a while.
 So to load/parse all the xrefs for the human the command would be:-
 xref_parser.pm -host host1 -port 3350 -user admin -pass password
-               -dbname xref_store -species human
+               -dbname xref_store -species human -create
 So we now have a set of xrefs that are dependent on the uniprot and refseq
 entries loaded.
 These can then be mapped to the ENSEMBL entities with the
 xref_mapper.pl script.
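
The two-stage pipeline this hunk describes can be summarised as a sketch (same placeholder credentials as in the tutorial; xref_mapper.pl's own options are not shown in this excerpt, so it is left bare):

```shell
# Stage 1: load and parse all human xrefs, creating the database in one step
# thanks to the new -create flag (placeholder credentials from the tutorial).
perl xref_parser.pl -host host1 -port 3350 -user admin -pass password \
     -dbname xref_store -species human -create

# Stage 2: map the loaded UniProt/RefSeq-dependent xrefs onto Ensembl
# entities. (Its arguments are not shown in this excerpt of the README.)
perl xref_mapper.pl
```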
The parsers.
...
...