scale - Implementing a scale

According to *On the Theory of Scales of Measurement* by S.S. Stevens, scales can be classified into four types: nominal, ordinal, interval and ratio. In current (2016) terminology, nominal data is made up of unordered categories and ordinal data of ordered categories; both can be classified as discrete. Interval and ratio data, on the other hand, are continuous.

The scale classes below show how the rest of the Mizani package can be used to implement the two categories of scales. The key tasks are training and mapping, and these correspond to the train and map methods.

To train a scale on data means to make the scale learn the limits of the data. This is elaborate (or worthy of a dedicated method) for two reasons:

  • Practical -- data may be split up across more than one object, yet all will be represented by a single scale.

  • Conceptual -- training is a key action that may need to be inserted into multiple locations of the data processing pipeline before a graphic can be created.
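For a continuous scale, training reduces to merging the min/max of each new batch of data with the limits learned so far. A minimal sketch under that assumption (the function name `train_continuous` is illustrative; the real implementation is `scale_continuous.train` below):

```python
import numpy as np

def train_continuous(new_data, old=None):
    # Merge the limits of the new data with any previously learned
    # limits -- this is why data split across several objects can
    # still be represented by a single scale.
    if len(new_data) == 0:
        return old
    low, high = float(np.min(new_data)), float(np.max(new_data))
    if old is not None:
        low, high = min(low, old[0]), max(high, old[1])
    return (low, high)

# Data split across two objects, yet one scale covers both:
limits = train_continuous([2.0, 7.0, 4.0])         # (2.0, 7.0)
limits = train_continuous([1.0, 9.0], old=limits)  # (1.0, 9.0)
```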

To map data onto a scale means to associate data values with values (potential readings) on a scale. This is perhaps the most important concept underpinning a scale.
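Concretely, mapping a continuous value usually means rescaling it onto [0, 1] within the trained limits and reading the result off a palette (any callable f(x)). A sketch of that idea, with a hypothetical greyscale palette standing in for a real one:

```python
def map_continuous(x, palette, limits):
    # Rescale each value to [0, 1] over the limits, then look it up
    # in the palette callable.
    low, high = limits
    return [palette((v - low) / (high - low)) for v in x]

# A hypothetical greyscale palette: 0 -> black, 1 -> white
grey = lambda s: "#%02x%02x%02x" % ((round(255 * s),) * 3)

map_continuous([1.0, 5.5, 10.0], grey, (1.0, 10.0))
# → ['#000000', '#808080', '#ffffff']
```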

The apply methods are simple examples of how to put it all together.

class mizani.scale.scale_continuous

Continuous scale

classmethod apply(x: FloatArrayLike, palette: ContinuousPalette, na_value: Any = None, trans: Trans | None = None) → NDArrayFloat

Scale data continuously

Parameters:
    x : array_like
        Continuous values to scale.
    palette : callable f(x)
        Palette to use.
    na_value : object
        Value to use for missing values.
    trans : trans
        How to transform the data before scaling. If None, no
        transformation is done.

Returns:
    out : array_like
        Scaled values.
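In essence, apply trains on the data and then maps it with the learned limits. A rough sketch of that composition (na_value handling is omitted, and the `.transform` call assumes a mizani Trans object is passed):

```python
import numpy as np

def apply_continuous(x, palette, trans=None):
    # Optionally transform, then train (learn the limits) and map
    # (rescale to [0, 1] and look up in the palette) in one step.
    x = np.asarray(x, dtype=float)
    if trans is not None:
        x = trans.transform(x)
    low, high = float(np.min(x)), float(np.max(x))
    scaled = (x - low) / (high - low)
    return np.array([palette(s) for s in scaled])

apply_continuous([0.0, 5.0, 10.0], lambda s: s)
# → array([0. , 0.5, 1. ])
```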

classmethod train(new_data: FloatArrayLike, old: TupleFloat2 | None = None) → TupleFloat2

Train a continuous scale

Parameters:
    new_data : array_like
        New values.
    old : tuple
        Old range.

Returns:
    out : tuple
        Limits (range) of the scale.

classmethod map(x: FloatArrayLike, palette: ContinuousPalette, limits: TupleFloat2, na_value: Any = None, oob: Callable[[TVector], TVector] = censor) → NDArrayFloat

Map values to a continuous palette

Parameters:
    x : array_like
        Continuous values to scale.
    palette : callable f(x)
        Palette to use.
    limits : tuple
        Limits (range) of the scale.
    na_value : object
        Value to use for missing values.
    oob : callable f(x)
        Function to deal with values that are beyond the limits.

Returns:
    out : array_like
        Values mapped onto a palette.
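The default oob function is mizani's censor, which replaces out-of-bounds values with nan so that they later pick up na_value instead of a palette reading. A sketch of that behaviour (named `censor_sketch` to avoid suggesting this is the library's implementation):

```python
import math

def censor_sketch(x, limits):
    # Values outside the limits become nan; in map they then fall
    # through to na_value instead of being read off the palette.
    low, high = limits
    return [v if low <= v <= high else math.nan for v in x]

censor_sketch([0.5, 1.5, 2.5], (1.0, 2.0))
# → [nan, 1.5, nan]
```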

class mizani.scale.scale_discrete

Discrete scale

classmethod apply(x: AnyArrayLike, palette: DiscretePalette, na_value: Any = None)

Scale data discretely

Parameters:
    x : array_like
        Discrete values to scale.
    palette : callable f(x)
        Palette to use.
    na_value : object
        Value to use for missing values.

Returns:
    out : array_like
        Scaled values.

classmethod train(new_data: AnyArrayLike, old: Sequence[Any] | None = None, drop: bool = False, na_rm: bool = False) → Sequence[Any]

Train a discrete scale

Parameters:
    new_data : array_like
        New values.
    old : array_like
        Old range. List of values known to the scale.
    drop : bool
        Whether to drop (not include) unused categories.
    na_rm : bool
        If True, remove missing values. Missing values are either
        NaN or None.

Returns:
    out : list
        Values covered by the scale.
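For a discrete scale, training means accumulating the categories seen so far. A simplified sketch (the real method additionally supports the drop option and pandas categoricals; this version just appends unseen values in order):

```python
def train_discrete(new_data, old=None, na_rm=False):
    # Start from the categories already known to the scale and
    # append any new ones, optionally skipping missing values.
    limits = list(old) if old is not None else []
    for value in new_data:
        if na_rm and value is None:
            continue
        if value not in limits:
            limits.append(value)
    return limits

limits = train_discrete(["a", "b"])  # ['a', 'b']
limits = train_discrete(["c", "a", None], old=limits, na_rm=True)
# → ['a', 'b', 'c']
```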

classmethod map(x: AnyArrayLike, palette: DiscretePalette, limits: Sequence[Any], na_value: Any = None) → AnyArrayLike

Map values to a discrete palette

Parameters:
    x : array_like
        Discrete values to scale.
    palette : callable f(x)
        Palette to use.
    limits : list
        Categories (range) of the scale.
    na_value : object
        Value to use for missing values.

Returns:
    out : array_like
        Values mapped onto a palette.
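Discrete mapping is a lookup: each data value is matched to its position in the trained limits. A sketch, assuming a palette of the form f(n) → list of n values (the shape of mizani's discrete palettes); the three-colour palette here is purely illustrative:

```python
def map_discrete(x, palette, limits, na_value=None):
    # Ask the palette for one value per category in the limits,
    # then look each data value up by its category.
    values = palette(len(limits))
    lookup = dict(zip(limits, values))
    return [lookup.get(v, na_value) for v in x]

# A hypothetical three-colour palette:
pal = lambda n: ["red", "green", "blue"][:n]

map_discrete(["a", "c", "a", "z"], pal, ["a", "b", "c"], na_value="grey")
# → ['red', 'blue', 'red', 'grey']
```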