Clustering


1. Overview

1.1. General problem statement

1.2. Clustering is a hard problem

???example "Clustering sky objects"

    - A catalog of 2 billion sky objects represents each object by its radiation in 7 dimensions (frequency bands).
    - Problem: cluster them into similar objects, e.g., galaxies, stars, quasars, etc.

<figure markdown="span">
    ![](fig/08-clustering/sn_gallery24.jpg)
    <figcaption>36 of the 500+ Type Ia supernovae discovered by the Sloan Supernova Survey</figcaption>
</figure>

???example "Clustering music albums"

    - Music divides into categories, and customers prefer a few categories.
        - Are categories simply genres?
    - Represent an album by the set of customers who bought it.
    - Similar albums have similar sets of customers, and vice versa.
    - Space of all albums:
        - Think of a space with one dimension for each customer.
        - Values in a dimension may be 0 or 1 only.

    ???abstract "Data representation"

        - An album is a point in this space $(x_1, x_2, \dots, x_k)$, where $x_i = 1$ if and only if the $i^{th}$ customer bought the CD.
        - For Amazon, the dimension is tens of millions.
        - Find clusters of similar CDs.

???example "Clustering documents"

    - Goal: finding topics.
    - Group together documents on the same topic.
    - Documents with similar sets of words may be about the same topic.
    - Dual formulation: a topic is a group of words that co-occur in many documents.

    ???abstract "Data representation"

        - Represent a document by a vector $(x_1, x_2, \dots, x_k)$, where $x_i = 1$ if and only if the $i^{th}$ word (in some order) appears in the document.
        - Documents with similar sets of words may be about the same topic.

1.3. Distance Measurements: Cosine, Jaccard, Euclidean
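As a quick illustration of the three measures named in this heading, the sketch below computes each of them for two toy documents represented as binary word vectors (as in the examples above); the vocabulary and the vectors are illustrative assumptions.

```python
import numpy as np

vocab = ["galaxy", "star", "quasar", "supernova", "survey"]
doc_a = np.array([1, 1, 0, 1, 1], dtype=float)   # words present in document A
doc_b = np.array([1, 0, 1, 1, 0], dtype=float)   # words present in document B

# Euclidean distance: straight-line distance between the two vectors.
euclidean = np.linalg.norm(doc_a - doc_b)

# Cosine distance: 1 - cos(angle between the two vectors).
cosine = 1.0 - doc_a @ doc_b / (np.linalg.norm(doc_a) * np.linalg.norm(doc_b))

# Jaccard distance: 1 - |intersection| / |union| of the two word sets.
inter = np.sum(np.minimum(doc_a, doc_b))
union = np.sum(np.maximum(doc_a, doc_b))
jaccard = 1.0 - inter / union

print(euclidean, cosine, jaccard)   # ~1.73, ~0.42, 0.6
```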

1.4. Overview: methods of clustering

=== "Hierarchical"

    - Agglomerative (bottom up): each point starts as its own cluster; repeatedly combine the two nearest clusters (see the sketch after these tabs).
    - Divisive (top down): start with one cluster and recursively split it.
    - Key operation: repeatedly combine the two nearest clusters.

=== "Point assignment"

    - Maintain a set of clusters.
    - Each point belongs to the nearest cluster.
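
A minimal sketch of the agglomerative (bottom-up) strategy in the tabs above, assuming SciPy is available; the toy points and the cut threshold are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

points = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9], [9.0, 0.0]])

# Each point starts as its own cluster; 'single' linkage repeatedly merges
# the two clusters whose closest members are nearest to each other.
Z = linkage(points, method="single", metric="euclidean")

# Cut the dendrogram so points within distance 1.0 end up in the same cluster.
labels = fcluster(Z, t=1.0, criterion="distance")
print(labels)   # e.g. [1 1 2 2 3]
```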

2. K-means clustering

2.1. Overview

2.2. Populating clusters

1. For each point, place it in the cluster to whose current centroid it is nearest.
    - A cluster's centroid has coordinates equal to the averages of the coordinates of all its points.
2. After all points are assigned, update the locations of the centroids of the k clusters.
3. Reassign all points to their closest centroid; repeat until the assignments stop changing (see the sketch below this list).
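
A minimal NumPy sketch of the assignment/update loop above; the sample data, k, and the convergence test are illustrative assumptions.

```python
import numpy as np

def kmeans(points, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Start from k randomly chosen points as the initial centroids.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iters):
        # 1. Assign each point to the nearest current centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # 2. Recompute each centroid as the mean of its assigned points.
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        # 3. Stop once the centroids no longer move.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

points = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.8]])
centroids, labels = kmeans(points, k=2)
print(centroids, labels)
```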

2.3. How to select k?

???example "Visual example"

???warning "Too few"

    ![Too few](fig/08-clustering/kmean_few.png)

    - Many long distances to centroid

???success "Just right"

    ![Just right](fig/08-clustering/kmean_right.png)

    - Distances are relatively short

???warning "Too many"

    ![Too many](fig/08-clustering/kmean_many.png)

    - Little improvement in avaerage distance
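
A minimal sketch of turning the pictures above into a rule of thumb: run k-means for several values of k and watch the average distance to the assigned centroid, choosing the k beyond which the improvement is small (the "elbow"). scikit-learn and the synthetic blobs are assumptions, not part of the notes.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three well-separated blobs in 2-D.
points = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
                    for c in [(0, 0), (5, 5), (0, 5)]])

for k in range(1, 7):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(points)
    # Average distance from each point to its assigned centroid.
    avg_dist = np.mean(
        np.linalg.norm(points - model.cluster_centers_[model.labels_], axis=1))
    print(k, round(avg_dist, 3))

# The curve drops sharply up to k = 3, then flattens: little improvement in
# average distance past the "right" k, matching the pictures above.
```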

3. K-means on big data: BFR (Bradley-Fayyad-Reina)

3.1. BFR: Bradley-Fayyad-Reina

???abstract "BFR"

    - Points are read from disk one main-memory-full at a time.
    - Most points from previous memory loads are summarized by simple statistics.
    - To begin, from the initial load we select the initial k centroids by some sensible approach:
        - Take k random points, or
        - Take a small random sample and cluster it optimally, or
        - Take a sample; pick a random point, and then k-1 more points, each as far from the previously selected points as possible (see the sketch below).
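
A minimal sketch of the third initialization idea above (farthest-first selection from a sample): pick one random point, then repeatedly pick the point farthest from everything chosen so far. The sample data and the function name are illustrative assumptions.

```python
import numpy as np

def farthest_first_centroids(sample, k, seed=0):
    rng = np.random.default_rng(seed)
    # Start with one random point from the sample.
    centroids = [sample[rng.integers(len(sample))]]
    for _ in range(k - 1):
        # Distance from every sample point to its nearest already-chosen centroid ...
        dists = np.min(
            np.linalg.norm(sample[:, None, :] - np.array(centroids)[None, :, :], axis=2),
            axis=1)
        # ... and take the point that maximizes that distance.
        centroids.append(sample[np.argmax(dists)])
    return np.array(centroids)

sample = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [9.0, 1.0]])
print(farthest_first_centroids(sample, k=3))
```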

3.2. Three classes of points

Point classes

???info "Discard set (DS)"

    - Points close enough to a centroid to be summarized

???info "Compression set (CS)"

    - Groups of points that are close together but not close to any existing centroid
    - These points are summarized, but not assigned to a cluster

???info "Retained set (RS)"

    - Isolated points waiting to be assigned to a compression set

3.3. Discard set (DS)

3.4. Processing the Memory-Load of points

1. Start out with a selection of k centroids.
2. Find those points that are "sufficiently close" to a cluster centroid; add those points to that cluster and to the DS.
    - These points are so close to the centroid that they can be summarized and then discarded.
3. Use any main-memory clustering algorithm to cluster the remaining points together with the old RS.
    - Clusters go to the CS; outlying points go to the RS.
4. DS: adjust the statistics of the clusters to account for the new points.
    - Add the Ns, SUMs, and SUMSQs (see the summary sketch below).
5. Consider merging compressed sets in the CS.
6. If this is the last round, merge all compressed sets in the CS and all RS points into their nearest cluster.
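
A minimal sketch of the per-cluster summary the DS maintains: the count N, the per-dimension SUM, and the per-dimension SUMSQ, from which the centroid and variance can be recovered at any time. The class and method names are illustrative assumptions.

```python
import numpy as np

class ClusterSummary:
    def __init__(self, dim):
        self.n = 0                   # N: number of points summarized
        self.sum = np.zeros(dim)     # SUM_i: sum of coordinates in dimension i
        self.sumsq = np.zeros(dim)   # SUMSQ_i: sum of squared coordinates

    def add(self, point):
        """Fold a point into the summary; the point itself can then be discarded."""
        self.n += 1
        self.sum += point
        self.sumsq += np.square(point)

    def centroid(self):
        return self.sum / self.n

    def variance(self):
        # Per-dimension variance: E[x^2] - (E[x])^2
        return self.sumsq / self.n - np.square(self.sum / self.n)

s = ClusterSummary(dim=2)
for p in [np.array([1.0, 2.0]), np.array([3.0, 4.0])]:
    s.add(p)
print(s.centroid(), s.variance())
```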

???tip "1. How do we decide if a point is 'close enough' to a cluster that we will add the point to that cluster?"

    BFR suggests two approaches:

    - The Mahalanobis distance is less than a threshold.
        - This is the normalized Euclidean distance from the centroid.
        - For a point $(x_1, x_2, \dots, x_d)$ and centroid $(c_1, c_2, \dots, c_d)$:
            - Normalize in each dimension: $y_i = \frac{x_i - c_i}{\sigma_i}$, where $\sigma_i$ is the standard deviation of the cluster's points in the $i^{th}$ dimension.
            - Take the sum of the squares of the $y_i$, then the square root: $d(x, c) = \sqrt{\sum_{i=1}^{d} y_i^2}$.
    - High likelihood of the point belonging to the currently nearest centroid (see the sketch below).
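
A minimal sketch of the Mahalanobis test above, using a cluster's centroid and per-dimension standard deviation; the sample numbers and the acceptance threshold are illustrative assumptions.

```python
import numpy as np

def mahalanobis(point, centroid, sigma):
    # Normalize the difference in each dimension by that dimension's
    # standard deviation, then take the Euclidean norm.
    y = (point - centroid) / sigma
    return np.sqrt(np.sum(np.square(y)))

point    = np.array([2.0, 3.0])
centroid = np.array([0.0, 0.0])
sigma    = np.array([1.0, 2.0])   # per-dimension std dev of the cluster

d = mahalanobis(point, centroid, sigma)
print(d)   # sqrt((2/1)^2 + (3/2)^2) = 2.5

# Accept the point into the cluster only if d is below a chosen threshold
# (the exact value is a design choice).
threshold = 3.0
print(d < threshold)
```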

???tip "2. How do we decide whether two compressed sets (CS) deserve to be combined into one?"

    - Compute the variance of the combined subcluster.
        - N, SUM, and SUMSQ allow us to make that calculation quickly (see the sketch below).
    - Combine if the combined variance is below some threshold.
    - Many alternatives: treat dimensions differently, consider density.
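
A minimal sketch of the merge test above: given each subcluster's N, SUM, and SUMSQ, the combined per-dimension variance can be computed without revisiting the original points. The sample statistics and the threshold value are illustrative assumptions.

```python
import numpy as np

def combined_variance(n1, sum1, sumsq1, n2, sum2, sumsq2):
    # Summaries of the union are just the sums of the two summaries.
    n, s, sq = n1 + n2, sum1 + sum2, sumsq1 + sumsq2
    return sq / n - np.square(s / n)   # per-dimension variance of the union

# Two tight subclusters near the origin.
n1, sum1, sumsq1 = 3, np.array([0.3, 0.3]), np.array([0.05, 0.05])
n2, sum2, sumsq2 = 2, np.array([0.4, 0.4]), np.array([0.10, 0.10])

var = combined_variance(n1, sum1, sumsq1, n2, sum2, sumsq2)
print(var)

# Merge the two compressed sets only if the combined variance stays small.
threshold = 0.5
print(bool(np.all(var < threshold)))
```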


4. Improvement on BFR: CURE

4.1. Overview

4.2. Two-pass algorithm


5. Clustering on Spark
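
A minimal sketch of running K-means at scale with Spark's built-in `pyspark.ml.clustering.KMeans` (which uses the distributed k-means|| initialization by default); the toy data and parameters are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.ml.clustering import KMeans
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("clustering-demo").getOrCreate()

# Toy data: each row carries a feature vector (the column name "features"
# is the KMeans default).
data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),),
        (Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
df = spark.createDataFrame(data, ["features"])

# Fit a distributed K-means model with k = 2.
kmeans = KMeans(k=2, seed=1)
model = kmeans.fit(df)

print(model.clusterCenters())   # the two learned centroids
model.transform(df).show()      # original rows plus a 'prediction' column

spark.stop()
```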