This example shows how to pull data from a Hadoop (HDFS) instance and load it into Socrata.
Hadoop (HDFS)

The Apache Hadoop software library is a framework for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the library itself detects and handles failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failure.
You can install Hadoop either locally or on a cloud service like Amazon EC2. This example uses a local instance with one namenode and one datanode, but the same steps apply to a fully distributed cluster. Before continuing, follow the link above to download and configure whichever instance you will be running.
The first step is to list the contents of your directory in HDFS to locate your file:
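For example, with the hdfs command-line client on your PATH (the directory path below is a hypothetical placeholder; substitute your own):

# List the contents of an HDFS directory to find the file to publish
hdfs dfs -ls /user/data/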
Download your data file from the HDFS filesystem and copy it to a local directory:
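A sketch using the same hypothetical names, with hdfs dfs -get (equivalently, -copyToLocal) copying the file out of HDFS:

# Copy the file from HDFS to the local filesystem
# (both paths are placeholders; adjust to your setup)
hdfs dfs -get /user/data/business_licenses.csv /tmp/business_licenses.csv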
Once you have your file locally, you can publish it via Socrata DataSync, just like any other data file.
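DataSync ships as an executable jar, so assuming the standard distribution you can launch it with java and publish the file through the GUI (the jar name varies by release):

# Launch the DataSync GUI to publish the downloaded file
# (jar file name is a placeholder for your DataSync release)
java -jar datasync.jar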
After you’re done, don’t forget to clean up the data file you downloaded:
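Assuming the hypothetical local path used above:

# Remove the local copy now that it has been published
rm /tmp/business_licenses.csv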