Wednesday, 15 October 2014, 16:00
Hall 5 Apartments
There are many ways to ingest (load) data into a Hadoop cluster, from file copying using the Hadoop FileSystem (FS) shell through to real-time streaming using technologies such as Flume and Hadoop Streaming.
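As a flavour of the simplest of these routes, the sketch below copies a local file into HDFS using Hadoop's FileSystem API, the programmatic equivalent of running hadoop fs -put from the FS shell. The paths and cluster settings are placeholders for illustration only and are not part of the workshop material.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Minimal sketch: copy one local file into an HDFS landing directory.
    // Assumes fs.defaultFS in the client configuration points at the target
    // cluster (e.g. hdfs://namenode:8020); the paths below are hypothetical.
    public class SimpleHdfsIngest {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            try (FileSystem fs = FileSystem.get(conf)) {
                Path localFile = new Path("/tmp/weblogs/access_log.txt"); // hypothetical source file
                Path landingDir = new Path("/user/etl/raw/weblogs/");     // hypothetical HDFS landing area
                fs.copyFromLocalFile(localFile, landingDir);
                System.out.println("Copied " + localFile + " to " + landingDir);
            }
        }
    }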
In this workshop we'll take a high-level look at the data ingestion options for Hadoop, and then show how Oracle Data Integrator and Oracle GoldenGate leverage these technologies to load and process data within your Hadoop cluster. We'll also consider the updated Oracle Information Management Reference Architecture and look at the best places to land and process your enterprise data: using Hadoop's schema-on-read approach to hold low-value, low-density raw data, and then using the concept of a "data factory" to load and process that data into more traditional Oracle relational storage, which holds high-density, high-value data.
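To make the schema-on-read idea concrete, the sketch below projects a Hive external table over raw files that have already been landed in HDFS, so the data can be queried in place before the "data factory" refines it into relational storage. The connection details, table name and columns are hypothetical, and the workshop itself approaches this through ODI and GoldenGate rather than hand-written JDBC.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Minimal sketch of schema-on-read: declare a schema over raw HDFS files
    // via a Hive external table, applied at query time; the files themselves
    // are never moved or transformed. The HiveServer2 address, credentials and
    // table definition are placeholders.
    public class SchemaOnReadSketch {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://hadoop-node:10000/default", "etl", "");
                 Statement stmt = conn.createStatement()) {
                stmt.execute(
                    "CREATE EXTERNAL TABLE IF NOT EXISTS raw_weblogs ("
                  + "  log_ts STRING, host STRING, request STRING, status INT) "
                  + "ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t' "
                  + "LOCATION '/user/etl/raw/weblogs/'"); // raw files stay where they landed
            }
        }
    }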