kartothek.io.dask.bag module

kartothek.io.dask.bag.build_dataset_indices__bag(store: Optional[Union[str, simplekv.KeyValueStore, Callable[[], simplekv.KeyValueStore]]], dataset_uuid: Optional[str], columns: Sequence[str], partition_size: Optional[int] = None, factory: Optional[kartothek.core.factory.DatasetFactory] = None) → dask.delayed.Delayed[source]

Function which builds an ExplicitSecondaryIndex.

This function loads the dataset, computes the requested indices and writes the indices to the dataset. The dataset partitions themselves are not mutated.

Parameters
  • store (Callable or str or simplekv.KeyValueStore) –

    The store where we can find or store the dataset.

    Can be either simplekv.KeyValueStore, a storefact store url or a generic Callable producing a simplekv.KeyValueStore

  • dataset_uuid (str) – The dataset UUID

  • columns (Sequence[str]) – The columns for which the indices should be built. If a specified column is not present in the dataset, a ValueError is raised.

  • partition_size (Optional[int]) – Dask bag partition size. Use larger numbers to decrease scheduler load and overhead; use smaller numbers for fine-grained scheduling and better resilience against worker errors.

  • factory (kartothek.core.factory.DatasetFactory) – A DatasetFactory holding the store and UUID to the source dataset.
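
    Example

    A minimal usage sketch; the store URL, dataset UUID, and indexed column below are hypothetical, not part of the API:

        from functools import partial

        import storefact
        from kartothek.io.dask.bag import build_dataset_indices__bag

        # Hypothetical store factory; any storefact URL or a
        # simplekv.KeyValueStore works here as well.
        store_factory = partial(storefact.get_store_from_url, "hfs:///tmp/kartothek_data")

        # Returns a dask.delayed.Delayed; the index is only computed and
        # written once the delayed object is computed.
        build_dataset_indices__bag(
            store=store_factory,
            dataset_uuid="my_dataset",  # hypothetical dataset UUID
            columns=["country"],        # hypothetical column to index
        ).compute()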

kartothek.io.dask.bag.read_dataset_as_dataframe_bag(dataset_uuid=None, store=None, tables=None, columns=None, concat_partitions_on_primary_index=False, predicate_pushdown_to_io=True, categoricals=None, label_filter=None, dates_as_object=False, predicates=None, factory=None, dispatch_by=None, partition_size=None)[source]

Retrieve data as dataframe from a dask.bag.Bag of MetaPartition objects

Parameters
  • dataset_uuid (str) – The dataset UUID

  • store (Callable or str or simplekv.KeyValueStore) –

    The store where we can find or store the dataset.

    Can be either simplekv.KeyValueStore, a storefact store url or a generic Callable producing a simplekv.KeyValueStore

  • tables (List[str]) – A list of tables to be loaded. If None is given, all tables of a partition are loaded

  • columns (Optional[List[Dict[str]]]) – A dictionary mapping tables to lists of columns. Only the specified columns are loaded for the corresponding table. If a specified table or column is not present in the dataset, a ValueError is raised.

  • concat_partitions_on_primary_index (bool) – Concatenate partitions based on their primary index values.

  • predicate_pushdown_to_io (bool) – Push predicates through to the I/O layer, default True. Disable this if you see problems with predicate pushdown for the given file even if the file format supports it. Note that this option only hides problems in the storage layer that need to be addressed there.

  • categoricals (Dict[str, List[str]]) – A dictionary mapping tables to list of columns that should be loaded as category dtype instead of the inferred one.

  • label_filter (Callable) – A callable that takes a partition label as its parameter and returns a boolean. The callable will be applied to the list of partitions during dispatch and will filter out all partitions for which it evaluates to False.

  • dates_as_object (bool) – Load pyarrow.date{32,64} columns as object columns in Pandas instead of using np.datetime64 to preserve their type. While this improves type safety, it comes at a performance cost.

  • predicates (List[List[Tuple[str, str, Any]]]) –

    Optional list of predicates, like [[('x', '>', 0), …]], that are used to filter the resulting DataFrame, possibly using predicate pushdown, if supported by the file format. This parameter is not compatible with filter_query.

    Predicates are expressed in disjunctive normal form (DNF). This means that the innermost tuple describes a single column predicate. These inner predicates are all combined with a conjunction (AND) into a larger predicate. The outermost list then combines all predicates with a disjunction (OR). This way, all kinds of predicates that are possible using boolean logic should be expressible (see the usage sketch after this function's return type).

    Available operators are: ==, !=, <=, >=, <, > and in.

    Filtering for missing values is supported with the operators ==, != and in, using the values np.nan and None for float and string columns respectively.

    Categorical data

    When using order-sensitive operators on categorical data, we assume that the categories obey a lexicographical ordering. This filtering may yield suboptimal performance and may be slower than the evaluation on non-categorical data.

    See also Filtering / Predicate pushdown and Efficient Querying

  • factory (kartothek.core.factory.DatasetFactory) – A DatasetFactory holding the store and UUID to the source dataset.

  • dispatch_by (Optional[List[str]]) –

    List of index columns to group and partition the jobs by. There will be one job created for every observed index value combination. This may result in either many very small partitions or in few very large partitions, depending on the index you are using this on.

    Secondary indices

    This is also usable in combination with secondary indices, where the physical file layout may not be aligned with the logically requested layout. For optimal performance it is recommended to use this for columns that can benefit from predicate pushdown, since the jobs will fetch their data individually and will not shuffle data in memory / over the network.

  • partition_size (Optional[int]) – Dask bag partition size. Use larger numbers to decrease scheduler load and overhead; use smaller numbers for fine-grained scheduling and better resilience against worker errors.

Returns

A dask.bag.Bag containing the metapartitions, each mapped to a function that retrieves the data.

Return type

dask.bag.Bag
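
Example

A hedged usage sketch reading one table with a DNF predicate; the store URL, dataset UUID, and table/column names are illustrative, not part of the API:

    from functools import partial

    import storefact
    from kartothek.io.dask.bag import read_dataset_as_dataframe_bag

    store_factory = partial(storefact.get_store_from_url, "hfs:///tmp/kartothek_data")

    # DNF predicates: the outer list ORs the inner lists; each inner list
    # ANDs its (column, operator, value) tuples. This reads rows where
    # (country == "DE" AND amount > 0) OR (country == "US").
    predicates = [
        [("country", "==", "DE"), ("amount", ">", 0)],
        [("country", "==", "US")],
    ]

    bag = read_dataset_as_dataframe_bag(
        dataset_uuid="my_dataset",  # hypothetical dataset UUID
        store=store_factory,
        tables=["table"],           # hypothetical table name
        columns={"table": ["country", "amount"]},
        predicates=predicates,
    )
    partitions = bag.compute()  # each element holds the data of one partition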

kartothek.io.dask.bag.store_bag_as_dataset(bag, store, dataset_uuid=None, metadata=None, df_serializer=None, overwrite=False, metadata_merger=None, metadata_version=4, partition_on=None, metadata_storage_format='json', secondary_indices=None)[source]

Transform and store a dask.bag of dictionaries containing dataframes to a kartothek dataset in store.

This is the dask.bag-equivalent of store_delayed_as_dataset(). See there for more detailed documentation on the different possible input types.

Parameters
  • store (Callable or str or simplekv.KeyValueStore) –

    The store where we can find or store the dataset.

    Can be either simplekv.KeyValueStore, a storefact store url or a generic Callable producing a simplekv.KeyValueStore

  • dataset_uuid (str) – The dataset UUID

  • metadata (Optional[Dict]) – A dictionary used to update the dataset metadata.

  • df_serializer (Optional[kartothek.serialization.DataFrameSerializer]) – A pandas DataFrame serialiser from kartothek.serialization

  • overwrite (Optional[bool]) – If True, allow overwrite of an existing dataset.

  • metadata_merger (Optional[Callable]) – By default, partition metadata is combined using the combine_metadata() function. You can supply a callable here that implements a custom merge operation on the metadata dictionaries (depending on the number of matches, this may involve more than two values).

  • metadata_version (Optional[int]) – The dataset metadata version

  • partition_on (List) – Column names by which the dataset should be physically partitioned. These columns may later be used as an index to improve query performance. Partition columns need to be present in all dataset tables. Sensitive to ordering.

  • metadata_storage_format (str) – The storage format to use for the dataset metadata. Currently supported are .json and .msgpack.zstd.

  • secondary_indices (List[str]) – A list of columns for which a secondary index should be calculated.

  • bag (dask.bag.Bag) – A dask bag containing either dictionaries of dataframes or plain dataframes.
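
Example

A minimal sketch of writing a bag of dataframes as a new dataset; the store URL, dataset UUID, and column names are hypothetical:

    from functools import partial

    import dask.bag as db
    import pandas as pd
    import storefact
    from kartothek.io.dask.bag import store_bag_as_dataset

    store_factory = partial(storefact.get_store_from_url, "hfs:///tmp/kartothek_data")

    # Each bag element becomes one partition; plain dataframes are accepted,
    # as are dictionaries of dataframes (see store_delayed_as_dataset()).
    partitions = db.from_sequence(
        [
            pd.DataFrame({"country": ["DE"], "amount": [1]}),
            pd.DataFrame({"country": ["US"], "amount": [2]}),
        ],
        npartitions=2,
    )

    # The write is only executed when the returned object is computed.
    store_bag_as_dataset(
        bag=partitions,
        store=store_factory,
        dataset_uuid="my_dataset",  # hypothetical dataset UUID
        partition_on=["country"],   # physical partitioning column
    ).compute()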