This project is based on the premise that ingesting large amounts of data into an application is best accomplished by using a staging area to quickly capture, cleanse, and organize data before loading it into an operational database (such as a SQL DBMS) for permanent storage. This stems from the impact that large volumes of information, and the relationships among them, have on performance and operational efficiency.
One solution is to develop an extraction, transformation, and load (ETL) process that adds the raw data to a staging area, such as a MongoDB database, without regard to data quality or relationships. Once in the staging area, the data can be reviewed and cleansed before being moved to a permanent home such as a PostgreSQL database. This strategy encompasses two distinct load processes: an initial one-time bulk load and a periodic load of new data.
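As a minimal sketch of the staging-to-permanent step, the following TypeScript example reads raw documents from a MongoDB staging collection, rejects those that fail simple validation, and inserts the rest into a PostgreSQL table. The database, collection, table, and column names, connection strings, and cleansing rules shown here are illustrative assumptions, not the application's actual schema.

```typescript
// etl-sketch.ts — a sketch of moving cleansed data from staging to permanent storage.
// All names and connection strings below are hypothetical, not Climate Explorer's schema.
import { MongoClient } from "mongodb";
import { Client as PgClient } from "pg";

interface RawReading {
  stationId?: string;
  temperature?: string | number;
  recordedAt?: string;
}

async function loadCleanReadings(): Promise<void> {
  const mongo = new MongoClient("mongodb://localhost:27017");
  const pg = new PgClient({ connectionString: "postgres://localhost:5432/climate" });
  await mongo.connect();
  await pg.connect();
  try {
    const staging = mongo.db("staging").collection<RawReading>("raw_readings");
    // Review and cleanse: skip documents missing required fields, coerce types.
    for await (const doc of staging.find()) {
      const temp = Number(doc.temperature);
      if (!doc.stationId || !doc.recordedAt || Number.isNaN(temp)) {
        continue; // reject rows that fail validation
      }
      await pg.query(
        "INSERT INTO readings (station_id, temperature, recorded_at) VALUES ($1, $2, $3)",
        [doc.stationId, temp, new Date(doc.recordedAt)]
      );
    }
  } finally {
    await mongo.close();
    await pg.end();
  }
}

loadCleanReadings().catch(console.error);
```

The same function could serve both load processes: run once over the full staging collection for the initial bulk load, then periodically over newly arrived documents for incremental loads.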
You can learn more about this effort by reading the series of articles on Medium.
This application consists of a React frontend and an Apollo Server backend, both of which must be running in order to use the application. Start by opening two terminal sessions, one in `climateexplorer/client` and the other in `climateexplorer/server`.

To start the frontend, issue the command `npm start` from the `client` directory. Similarly, issue `npm start` from the `server` directory to start the backend.
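For example, assuming the repository has been cloned and dependencies have already been installed (e.g. via `npm install`) in each directory:

```sh
# Terminal 1 — frontend
cd climateexplorer/client
npm start

# Terminal 2 — backend
cd climateexplorer/server
npm start
```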
The architecture of the Climate Explorer application is shown in the following diagram.
Documentation for the frontend and backend parts of the application is located at the following locations:
TBD
For more information, see the Change Log.
See Contributing and our Collaborator Guide.
Developers on this project can be found on the Contributors page of this repo.