How to Combine Data from Different Bioinformatics Databases for Better Results

Bioinformatics relies heavily on databases to store and manage the vast amounts of biological data generated by research. However, to obtain more comprehensive and accurate insights, researchers often need to combine data from different bioinformatics databases. This process, known as data integration, helps build a more complete picture of biological processes and supports more informed research decisions. This guide covers the basic principles and steps involved in combining data from multiple bioinformatics databases.

Understanding the Importance of Data Integration

In bioinformatics, no single database can capture all the information needed for complex analyses. For instance, a researcher studying a gene's role in a disease might need genetic information from GenBank, protein data from UniProt, and structural details from the Protein Data Bank (PDB). By integrating these datasets, the researcher can correlate genetic mutations with changes in protein function or structure, leading to better insights into the disease mechanism.

Data integration enables:

  • Comprehensive Analysis: Combining datasets provides a more complete view of biological systems.
  • Improved Accuracy: Cross-referencing data from different sources helps validate findings.
  • Efficient Research: Integrating data reduces the time spent manually searching and correlating information.

Step 1: Identify Relevant Databases

The first step in data integration is to identify the databases that contain the information you need. Here are some common types of bioinformatics databases:

  • Nucleotide Sequence Databases: GenBank, EMBL, DDBJ.
  • Protein Sequence and Function Databases: UniProt, Pfam, InterPro.
  • 3D Structure Databases: PDB, SCOP.
  • Gene Expression Databases: GEO, ArrayExpress.
  • Pathway and Interaction Databases: KEGG, Reactome.

Depending on your research question, you may need to pull data from two or more of these databases. For example, if you’re studying a metabolic pathway, you might combine gene data from Ensembl with pathway information from KEGG.

Step 2: Ensure Data Compatibility

Different databases may use varying formats, identifiers, and ontologies, which can make data integration challenging. Before combining data, it is essential to ensure that the data are compatible. Key considerations include:

  • File Formats: Common file formats include FASTA (for sequences), PDB (for 3D structures), and GFF (for gene annotations). Use conversion tools if necessary to standardize formats (a minimal sketch follows this list).
  • Identifiers: Databases may use different identifiers for the same entities (e.g., genes or proteins). Tools like BioMart or the UniProt ID mapping service can help convert identifiers across databases.
  • Ontology Standards: Ensure that the data adhere to the same ontology standards (e.g., Gene Ontology, Disease Ontology) to maintain consistency in data interpretation.
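
If formats differ, a minimal sketch of standardizing them is shown below, assuming Biopython is installed; the file names are placeholders.

```python
from Bio import SeqIO

# Convert a GenBank file (hypothetical name) to FASTA so that all sequence
# inputs share one format before they are combined with other datasets.
count = SeqIO.convert("sequences.gb", "genbank", "sequences.fasta", "fasta")
print(f"Converted {count} records to FASTA")
```

Identifier translation works similarly in practice: export a mapping table from BioMart or the UniProt ID mapping service, then join it to your data as shown in Step 4.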

Step 3: Retrieve Data

Once you've identified the relevant databases and ensured compatibility, the next step is to retrieve the data. Most bioinformatics databases provide several ways to access their data:

  • Web Interfaces: Databases like UniProt and NCBI offer web interfaces where you can manually search for and download data.
  • APIs (Application Programming Interfaces): APIs allow programmatic access to data, enabling automated retrieval of large datasets. For example, the NCBI E-utilities and the Ensembl REST API are commonly used for this purpose (a minimal example follows this list).
  • Bulk Downloads: Some databases offer bulk download options for large datasets. This can be useful if you need extensive data or want to create a local copy for faster access.
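
As a rough illustration of programmatic retrieval, the sketch below uses Biopython’s Entrez module, a wrapper around the NCBI E-utilities, to fetch a single nucleotide record. The accession NM_000546 (human TP53 mRNA) is used only as an example, and NCBI asks that you supply your own contact email.

```python
from Bio import Entrez, SeqIO

Entrez.email = "your.name@example.org"  # NCBI requires a contact address for E-utilities requests

# Fetch one GenBank-formatted nucleotide record (example accession: human TP53 mRNA)
handle = Entrez.efetch(db="nucleotide", id="NM_000546", rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(record.id, record.description)
print(f"Sequence length: {len(record.seq)} bp")
```

The same efetch call accepts a comma-separated list of accessions for batch retrieval, and REST APIs such as Ensembl’s can be queried in a similar way with a standard HTTP library.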

Step 4: Combine and Integrate Data

With the data retrieved, the next step is to combine them into a single, unified dataset. There are several approaches to data integration:

  • Manual Integration: For smaller datasets, you can manually combine data using spreadsheet software like Excel. This method is simple but may not scale well for large datasets.
  • Scripting and Programming: For larger datasets, scripting languages like Python or R are commonly used. Libraries such as Pandas (Python) or data.table (R) can efficiently handle large datasets and merge them based on common identifiers.
    • Example: Using Python and Pandas to merge gene expression data from GEO with protein data from UniProt based on common gene identifiers (a sketch of this is shown after this list).
  • Database Systems: For very large datasets, relational databases like MySQL or PostgreSQL can be used to store and query integrated data. SQL queries can join data across multiple tables, enabling more sophisticated data integration.
    • Example: Setting up a relational database that stores gene sequences from GenBank and protein structures from PDB, allowing for complex queries that link sequence data to structural information.
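
To make the Pandas approach concrete, here is a minimal sketch of merging an expression table with protein annotations on a shared gene symbol; the file names and column names are hypothetical stand-ins for exports from GEO and UniProt.

```python
import pandas as pd

# Hypothetical exports: an expression table from GEO and an annotation table from UniProt
expression = pd.read_csv("geo_expression.csv")               # columns: gene_symbol, sample, expression
proteins = pd.read_csv("uniprot_annotations.tsv", sep="\t")  # columns: gene_symbol, uniprot_id, protein_name

# Normalize the join key so capitalization differences do not break the merge
expression["gene_symbol"] = expression["gene_symbol"].str.upper()
proteins["gene_symbol"] = proteins["gene_symbol"].str.upper()

# Inner join keeps only genes present in both sources; use how="left" to keep
# expression rows that lack a protein annotation
merged = expression.merge(proteins, on="gene_symbol", how="inner")
merged.to_csv("integrated_dataset.csv", index=False)
print(merged.head())
```

The same join expressed as an SQL query scales naturally to the database-backed approach described in the last bullet.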

Step 5: Analyze Integrated Data

After integration, the data can be analyzed to answer specific research questions. This analysis may involve:

  • Statistical Analysis: Use tools like R or Python’s SciPy library to perform statistical tests on the integrated data, such as correlation analysis or regression modeling (see the sketch after this list).
  • Visualization: Data visualization tools like R’s ggplot2 or Python’s Matplotlib can help in interpreting the data, for example by visualizing the relationship between gene expression levels and protein modifications across different conditions.
  • Bioinformatics Tools: Specialized bioinformatics tools and platforms (e.g., Cytoscape for network analysis, GSEA for gene set enrichment analysis) can be used to perform more specific analyses on the integrated data.
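
As a small illustration of downstream analysis, the sketch below computes a Pearson correlation with SciPy and plots the relationship with Matplotlib; the input file and the columns expression and protein_abundance are hypothetical and stand in for whatever measurements your integrated dataset contains.

```python
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt

# Integrated dataset from Step 4 (hypothetical numeric columns)
df = pd.read_csv("integrated_dataset.csv").dropna(subset=["expression", "protein_abundance"])

# Test whether gene expression correlates with protein abundance
r, p_value = stats.pearsonr(df["expression"], df["protein_abundance"])
print(f"Pearson r = {r:.2f}, p = {p_value:.3g}")

# Scatter plot to visualize the relationship
plt.scatter(df["expression"], df["protein_abundance"], s=10)
plt.xlabel("Gene expression")
plt.ylabel("Protein abundance")
plt.title(f"Expression vs. protein abundance (r = {r:.2f})")
plt.tight_layout()
plt.savefig("expression_vs_protein.png", dpi=150)
```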

Step 6: Validate and Interpret Results

The final step is to validate and interpret the results of your analysis. This involves:

  • Cross-Verification: Use independent datasets or additional databases to cross-check your results. For instance, validating gene-protein interactions found in your integrated dataset by referencing databases like STRING or BioGRID.
  • Biological Interpretation: Relate the findings to biological processes or pathways. Tools like DAVID or KEGG Mapper can help contextualize results within known biological frameworks.
  • Reproducibility: Document the data integration process and analysis steps clearly to ensure that the results can be reproduced by other researchers.


Combining data from different bioinformatics databases is a powerful approach that can provide more comprehensive insights into biological questions. By following a systematic process—identifying relevant databases, ensuring data compatibility, retrieving and integrating data, and analyzing the results—researchers can enhance the accuracy and scope of their studies. Whether for gene function analysis, protein interaction studies, or pathway mapping, data integration is a key technique that supports more informed and reliable research outcomes.
