enow.com Web Search

Search results

  1. Apache Hadoop - Wikipedia

    en.wikipedia.org/wiki/Apache_Hadoop

    Apache Hadoop (/həˈduːp/) is a collection of open-source software utilities for reliable, scalable, distributed computing. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model.
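
    To make the MapReduce programming model concrete, here is a minimal word-count sketch in Java against the standard org.apache.hadoop.mapreduce API; the class names and whitespace tokenization are illustrative choices, not taken from the article.

      import java.io.IOException;
      import org.apache.hadoop.io.IntWritable;
      import org.apache.hadoop.io.LongWritable;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.mapreduce.Mapper;
      import org.apache.hadoop.mapreduce.Reducer;

      // Map phase: emit (word, 1) for every token of every input line.
      public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
          private static final IntWritable ONE = new IntWritable(1);
          private final Text word = new Text();

          @Override
          protected void map(LongWritable offset, Text line, Context context)
                  throws IOException, InterruptedException {
              for (String token : line.toString().split("\\s+")) {
                  if (!token.isEmpty()) {
                      word.set(token);
                      context.write(word, ONE);
                  }
              }
          }
      }

      // Reduce phase: the framework groups intermediate pairs by word; sum the counts.
      class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
          @Override
          protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
                  throws IOException, InterruptedException {
              int sum = 0;
              for (IntWritable c : counts) {
                  sum += c.get();
              }
              context.write(word, new IntWritable(sum));
          }
      }

    The framework takes care of splitting the input across HDFS blocks, shuffling intermediate pairs by key, and scheduling the map and reduce tasks across the cluster.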

  2. Apache Phoenix - Wikipedia

    en.wikipedia.org/wiki/Apache_Phoenix

    Apache Phoenix is an open source, massively parallel, relational database engine supporting OLTP for Hadoop using Apache HBase as its backing store. Phoenix provides a JDBC driver that hides the intricacies of the NoSQL store, enabling users to create, delete, and alter SQL tables, views, indexes, and sequences; insert and delete rows singly and in bulk; and query data through SQL.[1]
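
    A short sketch of that JDBC path, assuming a Phoenix driver on the classpath and an HBase/ZooKeeper quorum on localhost; the connection URL, table, and columns are made up for illustration.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class PhoenixSketch {
          public static void main(String[] args) throws Exception {
              // The URL names the ZooKeeper quorum of the backing HBase cluster.
              try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                  try (Statement stmt = conn.createStatement()) {
                      stmt.execute("CREATE TABLE IF NOT EXISTS metrics ("
                              + " host VARCHAR NOT NULL,"
                              + " ts BIGINT NOT NULL,"
                              + " value DOUBLE"
                              + " CONSTRAINT pk PRIMARY KEY (host, ts))");
                  }
                  // Phoenix writes rows with UPSERT rather than INSERT.
                  try (PreparedStatement ps =
                          conn.prepareStatement("UPSERT INTO metrics VALUES (?, ?, ?)")) {
                      ps.setString(1, "web01");
                      ps.setLong(2, System.currentTimeMillis());
                      ps.setDouble(3, 0.42);
                      ps.executeUpdate();
                  }
                  conn.commit();  // Phoenix connections do not auto-commit by default
                  try (Statement stmt = conn.createStatement();
                       ResultSet rs = stmt.executeQuery("SELECT host, value FROM metrics")) {
                      while (rs.next()) {
                          System.out.println(rs.getString(1) + " -> " + rs.getDouble(2));
                      }
                  }
              }
          }
      }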

  3. List of Apache Software Foundation projects - Wikipedia

    en.wikipedia.org/wiki/List_of_Apache_Software...

    Kibble: a suite of tools for collecting, aggregating, and visualizing activity in software projects; Knox: a REST API Gateway for Hadoop Services; Kudu: a distributed columnar storage engine built for the Apache Hadoop ecosystem; Kvrocks: a distributed key-value NoSQL database supporting rich data structures; Kylin: a distributed analytics engine

  4. Apache Kudu - Wikipedia

    en.wikipedia.org/wiki/Apache_Kudu

    Apache Kudu is a free and open-source column-oriented data store of the Apache Hadoop ecosystem. It is compatible with most of the data processing frameworks in the Hadoop environment. It completes Hadoop's storage layer, enabling fast analytics on fast data.
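
    As a rough illustration of how an application (or one of those frameworks) writes to Kudu, here is a small sketch with the Kudu Java client; the master address, table, and columns are assumptions, and the table is presumed to already exist.

      import org.apache.kudu.client.Insert;
      import org.apache.kudu.client.KuduClient;
      import org.apache.kudu.client.KuduSession;
      import org.apache.kudu.client.KuduTable;
      import org.apache.kudu.client.PartialRow;

      public class KuduSketch {
          public static void main(String[] args) throws Exception {
              // Connect to the Kudu master; table data is stored in columnar tablets.
              KuduClient client = new KuduClient.KuduClientBuilder("kudu-master:7051").build();
              try {
                  KuduTable table = client.openTable("metrics");
                  KuduSession session = client.newSession();

                  Insert insert = table.newInsert();
                  PartialRow row = insert.getRow();
                  row.addString("host", "web01");
                  row.addLong("ts", System.currentTimeMillis());
                  row.addDouble("value", 0.42);
                  session.apply(insert);

                  session.close();  // flushes any buffered operations
              } finally {
                  client.close();
              }
          }
      }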

  5. Apache Avro - Wikipedia

    en.wikipedia.org/wiki/Apache_Avro

    Avro is a row-oriented remote procedure call and data serialization framework developed within Apache's Hadoop project. It uses JSON for defining data types and protocols, and serializes data in a compact binary format.
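
    A minimal sketch of both halves of that sentence with the Avro Java library: a record type defined in JSON, then serialized with the compact binary encoding. The schema itself is invented for the example.

      import java.io.ByteArrayOutputStream;
      import org.apache.avro.Schema;
      import org.apache.avro.generic.GenericData;
      import org.apache.avro.generic.GenericDatumWriter;
      import org.apache.avro.generic.GenericRecord;
      import org.apache.avro.io.BinaryEncoder;
      import org.apache.avro.io.EncoderFactory;

      public class AvroSketch {
          public static void main(String[] args) throws Exception {
              // Data types are defined in JSON, as the article describes.
              String schemaJson = "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
                      + "{\"name\":\"name\",\"type\":\"string\"},"
                      + "{\"name\":\"age\",\"type\":\"int\"}]}";
              Schema schema = new Schema.Parser().parse(schemaJson);

              GenericRecord user = new GenericData.Record(schema);
              user.put("name", "Ada");
              user.put("age", 36);

              // Binary encoding is compact: field names live in the schema, not on the wire.
              ByteArrayOutputStream out = new ByteArrayOutputStream();
              BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
              new GenericDatumWriter<GenericRecord>(schema).write(user, encoder);
              encoder.flush();
              System.out.println("Serialized " + out.size() + " bytes");
          }
      }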

  6. Cascading (software) - Wikipedia

    en.wikipedia.org/wiki/Cascading_(software)

    Cascading is a software abstraction layer for Apache Hadoop and Apache Flink. Cascading is used to create and execute complex data processing workflows on a Hadoop cluster using any JVM-based language (Java, JRuby, Clojure, etc.), hiding the underlying complexity of MapReduce jobs. It is open source and available under the Apache License.
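
    As a rough sketch of that "hide the MapReduce plumbing" idea, the Cascading 2.x-style flow below copies text records between HDFS paths without the application defining any mapper or reducer; the paths and flow name are placeholders.

      import java.util.Properties;
      import cascading.flow.FlowDef;
      import cascading.flow.hadoop.HadoopFlowConnector;
      import cascading.pipe.Pipe;
      import cascading.scheme.hadoop.TextLine;
      import cascading.tap.SinkMode;
      import cascading.tap.Tap;
      import cascading.tap.hadoop.Hfs;

      public class CopyFlow {
          public static void main(String[] args) {
              // Source and sink taps over HDFS; the scheme describes how records are read and written.
              Tap in = new Hfs(new TextLine(), "hdfs:///data/in");
              Tap out = new Hfs(new TextLine(), "hdfs:///data/out", SinkMode.REPLACE);

              // A pipe assembly declares the data flow; Cascading plans and runs the
              // underlying MapReduce job(s) on the cluster.
              Pipe copy = new Pipe("copy");

              FlowDef flowDef = FlowDef.flowDef()
                      .setName("copy-flow")
                      .addSource(copy, in)
                      .addTailSink(copy, out);

              new HadoopFlowConnector(new Properties()).connect(flowDef).complete();
          }
      }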

  7. Apache HBase - Wikipedia

    en.wikipedia.org/wiki/Apache_HBase

    HBase is an open-source non-relational distributed database modeled after Google's Bigtable and written in Java. It is developed as part of Apache Software Foundation's Apache Hadoop project and runs on top of HDFS (Hadoop Distributed File System) or Alluxio, providing Bigtable-like capabilities for Hadoop.
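
    A brief sketch of that Bigtable-like model through the HBase Java client, assuming an hbase-site.xml on the classpath and an existing table named "users" with a column family "info" (both names are illustrative).

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.TableName;
      import org.apache.hadoop.hbase.client.Connection;
      import org.apache.hadoop.hbase.client.ConnectionFactory;
      import org.apache.hadoop.hbase.client.Get;
      import org.apache.hadoop.hbase.client.Put;
      import org.apache.hadoop.hbase.client.Result;
      import org.apache.hadoop.hbase.client.Table;
      import org.apache.hadoop.hbase.util.Bytes;

      public class HBaseSketch {
          public static void main(String[] args) throws Exception {
              Configuration conf = HBaseConfiguration.create();  // reads hbase-site.xml
              try (Connection conn = ConnectionFactory.createConnection(conf);
                   Table table = conn.getTable(TableName.valueOf("users"))) {
                  // Bigtable-style model: rows keyed by bytes, values kept in column families.
                  Put put = new Put(Bytes.toBytes("row-1"));
                  put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Ada"));
                  table.put(put);

                  Result result = table.get(new Get(Bytes.toBytes("row-1")));
                  byte[] name = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
                  System.out.println(Bytes.toString(name));
              }
          }
      }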

  8. Apache Parquet - Wikipedia

    en.wikipedia.org/wiki/Apache_Parquet

    Apache Parquet is a free and open-source column-oriented data storage format in the Apache Hadoop ecosystem. It is similar to RCFile and ORC, the other columnar-storage file formats in Hadoop, and is compatible with most of the data processing frameworks around Hadoop.
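
    A small writer sketch using the parquet-avro binding, one common way to produce Parquet files from Java; the schema and output path are invented for the example.

      import org.apache.avro.Schema;
      import org.apache.avro.generic.GenericData;
      import org.apache.avro.generic.GenericRecord;
      import org.apache.hadoop.fs.Path;
      import org.apache.parquet.avro.AvroParquetWriter;
      import org.apache.parquet.hadoop.ParquetWriter;
      import org.apache.parquet.hadoop.metadata.CompressionCodecName;

      public class ParquetSketch {
          public static void main(String[] args) throws Exception {
              Schema schema = new Schema.Parser().parse(
                      "{\"type\":\"record\",\"name\":\"Event\",\"fields\":["
                      + "{\"name\":\"id\",\"type\":\"long\"},"
                      + "{\"name\":\"category\",\"type\":\"string\"}]}");

              // Values of the same column are stored together on disk, which is what
              // makes Parquet a column-oriented format.
              try (ParquetWriter<GenericRecord> writer =
                      AvroParquetWriter.<GenericRecord>builder(new Path("events.parquet"))
                              .withSchema(schema)
                              .withCompressionCodec(CompressionCodecName.SNAPPY)
                              .build()) {
                  for (long i = 0; i < 3; i++) {
                      GenericRecord record = new GenericData.Record(schema);
                      record.put("id", i);
                      record.put("category", "demo");
                      writer.write(record);
                  }
              }
          }
      }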