EBSTSplitter

iSOUP-Tree's Extended Binary Search Tree (E-BST).

This class implements the Extended Binary Search Tree (E-BST) structure [1], using the variant employed by Osojnik et al. [2] in the iSOUP-Tree algorithm. This structure is employed to observe the target space distribution.

Proposed along with Fast Incremental Model Tree with Drift Detection (FIMT-DD) [1], E-BST was the first attribute observer (AO) proposed for incremental Hoeffding Tree regressors. This AO works by storing all observations between splits in an extended binary search tree structure. E-BST stores the input feature realizations and statistics of the target(s) that enable calculating the split heuristic at any time. To alleviate time and memory costs, E-BST implements a memory management routine, where the worst split candidates are pruned from the binary tree.

In this variant, only the left branch statistics are stored and the complete split-enabling statistics are calculated with an in-order traversal of the binary search tree.
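
The following is a minimal, self-contained sketch of this idea for a single numeric feature and a single target (illustrative only; river's internal implementation differs in naming and bookkeeping). Each node stores only the statistics of the observations routed through its left branch, and an in-order traversal reconstructs the full left/right statistics of every candidate split point:

```python
class EBSTNode:
    """Illustrative E-BST node: `n`, `sum`, `sum_sq` summarize every
    observation with att_val <= self.att_val that passed through here."""

    def __init__(self, att_val, target_val, weight=1.0):
        self.att_val = att_val
        # Left-branch statistics: weighted count, sum, and sum of squares.
        self.n = weight
        self.sum = weight * target_val
        self.sum_sq = weight * target_val ** 2
        self.left = None
        self.right = None

    def insert(self, att_val, target_val, weight=1.0):
        node = self
        while True:
            if att_val <= node.att_val:
                # Anything routed left (equal keys included) updates the
                # statistics stored in this node.
                node.n += weight
                node.sum += weight * target_val
                node.sum_sq += weight * target_val ** 2
                if att_val == node.att_val:
                    return
                if node.left is None:
                    node.left = EBSTNode(att_val, target_val, weight)
                    return
                node = node.left
            else:
                # Right routing leaves this node's statistics untouched.
                if node.right is None:
                    node.right = EBSTNode(att_val, target_val, weight)
                    return
                node = node.right


def iter_split_points(node, total, carried=(0.0, 0.0, 0.0)):
    """In-order traversal yielding (split_value, left_stats, right_stats),
    where each stats tuple is (n, sum, sum_sq)."""
    if node.left is not None:
        yield from iter_split_points(node.left, total, carried)
    # Complete statistics of the candidate split x <= node.att_val.
    left = tuple(c + s for c, s in zip(carried, (node.n, node.sum, node.sum_sq)))
    right = tuple(t - lv for t, lv in zip(total, left))
    yield node.att_val, left, right
    if node.right is not None:
        # Moving right: this node's left statistics are now behind us,
        # so they are carried down as part of the accumulated prefix.
        yield from iter_split_points(node.right, total, left)
```

Carrying the accumulated prefix down the right branch is what makes storing only left-branch statistics sufficient: the right-side statistics of any split fall out as the total minus the reconstructed left side.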

Attributes

  • is_numeric

    Determine whether or not the splitter works with numerical features.

  • is_target_class

    Indicate the kind of learning task the splitter is designed for. If True, the splitter works with classification trees; otherwise, it is designed for regression trees.

Methods

best_evaluated_split_suggestion

Get the best split suggestion given a criterion and the target's statistics.

Parameters

  • criterion (river.tree.split_criterion.base.SplitCriterion)
  • pre_split_dist (Union[List, Dict])
  • att_idx (Hashable)
  • binary_only (bool) – defaults to True

Returns

BranchFactory: Suggestion of the best attribute split.
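
As a hedged illustration of a direct call (a single-target setup is assumed, where the pre-split distribution is a running river.stats.Var; exact import paths and the attributes of the returned BranchFactory may differ between river versions):

```python
from river import stats
from river.tree.split_criterion import VarianceReductionSplitCriterion
from river.tree.splitter import EBSTSplitter

splitter = EBSTSplitter()
pre_split_dist = stats.Var()  # running distribution of the target

for i in range(20):
    x, y = i / 2, 2.0 * (i / 2) + 1.0  # a simple synthetic stream
    splitter.update(x, y, 1.0)         # feed the attribute observer
    pre_split_dist.update(y)           # track the overall target statistics

suggestion = splitter.best_evaluated_split_suggestion(
    criterion=VarianceReductionSplitCriterion(),
    pre_split_dist=pre_split_dist,
    att_idx="x",
    binary_only=True,
)
# merit and split_info (the split threshold) are assumed BranchFactory attributes.
print(suggestion.merit, suggestion.split_info)
```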

clone

Return a fresh estimator with the same parameters.

The clone has the same parameters but has not been updated with any data. This works by looking at the parameters from the class signature. Each parameter is either recursively cloned, if it is a River class, or deep-copied via copy.deepcopy otherwise. If the calling object is stochastic (i.e. it accepts a seed parameter) and has not been seeded, then the clone will not be idempotent. Indeed, this method's purpose is simply to return a new instance with the same input parameters.
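
As a brief, hypothetical illustration (the attribute values are made up):

```python
from river.tree.splitter import EBSTSplitter

splitter = EBSTSplitter()
splitter.update(1.0, 2.3, 1.0)  # the original accumulates target statistics

fresh = splitter.clone()        # same input parameters, no accumulated data
```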

cond_proba

Not implemented in regression splitters.

Parameters

  • att_val
  • target_val (Union[bool, str, int])

remove_bad_splits

Remove bad splits.

Based on FIMT-DD's procedure [1] to remove bad split candidates from the E-BST. This mechanism is triggered every time a split attempt fails. The rationale is to remove points whose split merit is much worse than the best candidate overall (for which the growth decision already failed).

Let \(m_1\) be the merit of the best split point and \(m_2\) be the merit of the second best split candidate. The ratio \(r = m_2/m_1\) along with the Hoeffding bound (\(\epsilon\)) are used to decide upon creating a split. A split occurs when \(r < 1 - \epsilon\). A split candidate, with merit \(m_i\), is considered bad if \(m_i / m_1 < r - 2\epsilon\). The rationale is the following: if the merit ratio for this point is smaller than the lower bound of \(r\), then the true merit of that split relative to the best one is small. Hence, this candidate can be safely removed.

To avoid excessive and costly manipulations of the E-BST to update the stored statistics, only the nodes whose children are all bad split points are pruned, as defined in [1].
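
A minimal sketch of this test (illustrative; the function and argument names are hypothetical, not river's internals):

```python
def is_bad_split(m_i: float, m_1: float, r: float, eps: float) -> bool:
    """A candidate with merit m_i is 'bad' when its merit ratio falls
    below the lower bound of r, i.e. m_i / m_1 < r - 2 * eps."""
    return m_i / m_1 < r - 2 * eps


# Example: at the last failed split attempt, r = 0.9 and eps = 0.05,
# so the cutoff is r - 2*eps = 0.8. A candidate at 75% of the best
# merit is prunable; one at 85% is kept.
print(is_bad_split(0.75, 1.0, 0.9, 0.05))  # True
print(is_bad_split(0.85, 1.0, 0.9, 0.05))  # False
```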

Parameters

  • criterion
  • last_check_ratio (float)
  • last_check_vr (float)
  • last_check_e (float)
  • pre_split_dist (Union[List, Dict])

update

Update the statistics of this observer given an attribute value, the corresponding target value, and the weight of the observed instance.

Parameters

  • att_val
  • target_val (Union[bool, str, int, numbers.Number])
  • sample_weight (float)
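
In practice, update is invoked by the host tree rather than called by hand. A sketch of typical end-to-end usage, assuming current river naming (tree.HoeffdingTreeRegressor and its splitter parameter):

```python
from river import tree
from river.tree.splitter import EBSTSplitter

model = tree.HoeffdingTreeRegressor(
    grace_period=100,         # attempt a split every 100 observations
    splitter=EBSTSplitter(),  # E-BST attribute observer for numeric features
)

for x, y in [({"x": 1.0}, 2.3), ({"x": 3.2}, 7.1), ({"x": 4.0}, 8.4)]:
    model.learn_one(x, y)     # internally calls splitter.update(...)

print(model.predict_one({"x": 2.0}))
```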

References


  1. Ikonomovska, E., Gama, J., & Džeroski, S. (2011). Learning model trees from evolving data streams. Data Mining and Knowledge Discovery, 23(1), 128-168.

  2. Osojnik, A. (2017). Structured output prediction on data streams (Doctoral dissertation).