Big Data Beyond Hadoop

Apache Hadoop is the de facto open source standard for big data solutions. Getting all of that data into Hadoop for processing and storage (and back out to your applications later) is a challenging task: data comes from many different applications (SAP, Oracle’s Siebel product family, and so on) and databases (file, SQL, or NoSQL), travels over different communication technologies (such as HTTP, FTP, RMI, or JMS), and arrives in different data formats (such as CSV, XML, or binary data).

This session shows how the powerful combination of Apache Hadoop and Apache Camel can help you handle this challenge. Learn how to feed virtually any kind of data into Hadoop—without lots of complex or redundant boilerplate code.
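To give a flavor of the approach, an integration route in Camel's XML DSL might look roughly like the following sketch. All host names, paths, and endpoint options here are hypothetical, and it assumes the camel-ftp and camel-hdfs components are available on the classpath:

```xml
<!-- Sketch only: hosts, paths, and options below are illustrative. -->
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <!-- Poll CSV exports from a (hypothetical) FTP server -->
    <from uri="ftp://user@ftp.example.com/exports?password=secret&amp;include=.*\.csv"/>
    <!-- Write each file into HDFS, where Hadoop jobs can pick it up -->
    <to uri="hdfs://namenode.example.com:8020/data/incoming"/>
  </route>
</camelContext>
```

The point is that Camel's endpoint URIs hide the protocol- and format-specific plumbing, so swapping FTP for HTTP or JMS means changing a URI rather than rewriting integration code.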
