Important Dates

Workshop: October 9-10, 2013

Full-length paper submission deadline: October 18, 2013

 

Fourth Workshop on Big Data Benchmarking

The Fourth Workshop on Big Data Benchmarking (4th WBDB) will be held on October 9-10, 2013, at the Brocade IMC Theatre, Building 3, 130 Holger Way, San Jose, CA.

The objective of the WBDB workshops is to make progress towards the development of industry-standard, application-level benchmarks for evaluating hardware and software systems for big data applications.

To be successful, a benchmark should be:

  • Simple to implement and execute;
  • Cost effective, so that the benefits of executing the benchmark justify its expense;
  • Timely, with benchmark versions keeping pace with rapid changes in the marketplace; and
  • Verifiable so that results of the benchmark can be validated via independent means.

Based on discussions at the previous big data benchmarking workshops, two benchmark proposals are currently under consideration. The first, called BigBench (to appear in the ACM SIGMOD Conference 2013), extends the Transaction Processing Performance Council's Decision Support benchmark (TPC-DS) with semi-structured and unstructured data and new queries targeted at those data. The second is based on a Deep Analytics Pipeline for event processing (see http://cc.readytalk.com/play?id=1hws7t).

Topics

To make progress towards a big data benchmarking standard, the workshop will explore a range of issues, including:

  • Data features: New feature sets of data, including high-dimensional data, sparse data, event-based data, and enormous data sizes.
  • System characteristics: System-level issues, including large-scale and evolving system configurations, shifting loads, and heterogeneous technologies for big data and cloud platforms.
  • Implementation options: Different implementation options such as SQL, NoSQL, Hadoop software ecosystem, and different implementations of HDFS.
  • Workload: Representative big data business problems and corresponding benchmark implementations. Specification of benchmark applications that represent the different modalities of big data, including graphs, streams, scientific data, and document collections.
  • Hardware options: Evaluation of new options in hardware, including different types of HDD, SSD, and main memory; large-memory systems; and new platform options such as dedicated commodity clusters and cloud platforms.
  • Synthetic data generation: Models and procedures for generating large-scale synthetic data with requisite properties.
  • Benchmark execution rules: For example, data scale factors, benchmark versioning to account for rapidly evolving workloads and system configurations, and benchmark metrics.
  • Metrics for efficiency: Measuring the efficiency of the solution, e.g. based on costs of acquisition, ownership, energy, and/or other factors, while encouraging innovation and avoiding benchmark escalations that favor large, inefficient configurations over small, efficient ones.
  • Evaluation frameworks: Tool chains, suites, and frameworks for evaluating big data systems.
  • Early implementations: Early implementations of the Deep Analytics Pipeline or BigBench, and lessons learned in benchmarking big data applications.
  • Enhancements: Proposals to augment these benchmarks, e.g. by adding more data genres (such as graphs) or by incorporating a range of machine learning and other algorithms, are encouraged.

Call for Papers

Related Links

First Workshop on Big Data Benchmarks, Performance Optimization, and Emerging Hardware (BPOE 2013), October 6, 2013, San Jose, CA.

Third WBDB, July 2013, Xi'an, China.

Second WBDB, December 2012, Pune, India.

First WBDB, May 2012, San Jose, CA.