@inproceedings{Kersten2011DataDeluge,
author       = {Kersten, M. L. and Idreos, S. and Manegold, S. and Liarou, E.},
title        = {The {Researcher}’s {Guide} {To} {The} {Data} {Deluge}: {Querying} {A} {Scientific} {Database} {In} {Just} {A} {Few} {Seconds}},
booktitle    = {Proceedings of International Conference on Very Large Data Bases 2011 (VLDB)},
conferencetitle    = {International Conference on Very Large Databases},
conferencedate     = {2011, August 29 - September 1},
conferencelocation = {Seattle, WA, USA},
pages        = {585--597},
year         = {2011},
note         = {Challenges \& Visions Track Best Paper Award.},
refereed     = {y},
size         = {13p.},
group        = {INS1},
language     = {en},
project      = {Non-NWO Project 1:[MonetDB ()]},

abstract     = {There is a clear need nowadays for extremely large data processing. This is especially true in the area of scientific data management, where we soon expect data inputs on the order of multiple petabytes. However, current data management technology is not suitable for such data sizes.

In the light of such new database applications, we can rethink some of the strict requirements database systems adopted in the past. We argue that strict correctness is one such requirement, responsible for performance degradation. In this paper, we propose a new paradigm towards building database kernels that may produce \emph{wrong but fast, cheap and indicative} results. Fast response times are an essential component of data analysis for exploratory applications; allowing for fast queries enables the user to develop a ``feeling'' for the data through a series of ``painless'' queries, which eventually leads to more detailed analysis in a targeted data area.

We propose a research path where a database kernel autonomously and on-the-fly decides to reduce the processing requirements of a running query based on workload, hardware and environmental parameters. This requires a complete redesign of the database kernel and its query processing strategy. For example, typical and very common scenarios where query processing performance degrades significantly are cases where a database operator has to spill data to disk, is forced to perform random access, or has to follow long linked lists. Here we ask the question: what if we simply avoid these steps, ``ignoring'' the side-effects on the correctness of the result?},
url          = {http://oai.cwi.nl/oai/asset/18546/18546B.pdf},
}

