Workshop: Social Media, Data Science, & Cartography
Alexander Dunkel, Madalina Gugulica
This is the second notebook in a series of four notebooks.
Open these notebooks through the file explorer on the left side.
Dunkel, A., Löchner, M., & Burghardt, D. (2020).
Privacy-aware visualization of volunteered geographic information (VGI) to analyze spatial activity:
A benchmark implementation. ISPRS International Journal of Geo-Information. DOI / PDF
HLL estimates the number of distinct items in a set (this is called cardinality estimation). Let's first see the regular approach of creating a set in Python
and counting the unique items in the set:
Regular set approach in Python
user1 = 'foo'
user2 = 'bar'
users = {user1, user2, user2, user2}
usercount = len(users)
print(usercount)
HLL approach
from python_hll.hll import HLL
import mmh3
user1_hash = mmh3.hash(user1)
user2_hash = mmh3.hash(user2)
hll = HLL(11, 5) # log2m=11, regwidth=5
hll.add_raw(user1_hash)
hll.add_raw(user2_hash)
hll.add_raw(user2_hash)
hll.add_raw(user2_hash)
usercount = hll.cardinality()
print(usercount)
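python_hll implements the full algorithm; to make the idea tangible, here is a minimal pure-Python sketch of the core HyperLogLog estimator. It uses hashlib in place of mmh3 and is illustrative only, not the library used in this notebook:

```python
import hashlib
import math

def _hash64(item: str) -> int:
    """Stable 64-bit hash (a stand-in for the mmh3 hash used above)."""
    return int.from_bytes(hashlib.md5(item.encode()).digest()[:8], "big")

class TinyHLL:
    """Minimal HyperLogLog sketch: each register keeps the maximum
    'leading-zero rank' seen in its substream of hashes."""
    def __init__(self, p: int = 8):
        self.p = p                    # log2 of the register count
        self.m = 1 << p               # number of registers (256)
        self.registers = [0] * self.m

    def add(self, item: str) -> None:
        h = _hash64(item)
        idx = h >> (64 - self.p)                 # first p bits pick a register
        rest = h & ((1 << (64 - self.p)) - 1)    # remaining bits
        rank = (64 - self.p) - rest.bit_length() + 1
        self.registers[idx] = max(self.registers[idx], rank)

    def cardinality(self) -> float:
        alpha = 0.7213 / (1 + 1.079 / self.m)
        raw = alpha * self.m ** 2 / sum(2.0 ** -r for r in self.registers)
        zeros = self.registers.count(0)
        if raw <= 2.5 * self.m and zeros:
            return self.m * math.log(self.m / zeros)  # small-range correction
        return raw

tiny = TinyHLL()
for i in range(1000):
    tiny.add(f"user{i}")
print(round(tiny.cardinality()))  # roughly 1000 (typical error around 7%)
```

Note that adding the same item twice never changes any register, which is why duplicates do not inflate the estimate.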
HLL has two modes of operation that increase accuracy for small sets.
Because explicit mode stores hashes in full, it provides no privacy benefit, which is why it should be disabled.
Repeat the process above with explicit mode turned off:
hll = HLL(11, 5, 0, 1) # log2m=11, regwidth=5, explicit=off, sparse=auto
hll.add_raw(user1_hash)
hll.add_raw(user2_hash)
hll.add_raw(user2_hash)
hll.add_raw(user2_hash)
usercount = hll.cardinality()
print(usercount)
Union of two sets
At any point, we can update an HLL set with new items
(which is why HLL works well in streaming contexts):
user3 = 'baz'
user3_hash = mmh3.hash(user3)
hll.add_raw(user3_hash)
usercount = hll.cardinality()
print(usercount)
… but separate HLL sets may also be created independently,
and only merged at the end for cardinality estimation:
hll_params = (11, 5, 0, 1)
hll1 = HLL(*hll_params)
hll2 = HLL(*hll_params)
hll3 = HLL(*hll_params)
hll1.add_raw(mmh3.hash('foo'))
hll2.add_raw(mmh3.hash('bar'))
hll3.add_raw(mmh3.hash('baz'))
hll1.union(hll2) # modifies hll1 to contain the union
hll1.union(hll3)
usercount = hll1.cardinality()
print(usercount)
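For comparison, the same three-way merge with plain Python sets. A union never double-counts items that appear in more than one set, which is the property the HLL union preserves (approximately); the overlapping item below is added purely for illustration:

```python
set1 = {"foo"}
set2 = {"bar", "baz"}
set3 = {"baz"}          # "baz" also appears in set2

merged = set()
for s in (set1, set2, set3):
    merged |= s         # in-place union, mirrors hll1.union(hll2)
print(len(merged))      # 3, not 4: the duplicate "baz" collapses
```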
A User Day refers to a common metric used in visual analytics:
each user is counted at most once per day.
This is commonly done by concatenation of a unique user identifier and the unique day of activity, e.g.:
userdays_set = set()
userday_sample = "96117893@N05" + "2012-04-14"
userdays_set.add(userday_sample)
print(len(userdays_set))
> 1
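The same pattern extended to several posts (the user IDs below are hypothetical): repeated posts by the same user on the same day collapse into a single user day:

```python
# (user_id, day) per post; IDs are hypothetical examples
posts = [
    ("96117893@N05", "2012-04-14"),
    ("96117893@N05", "2012-04-14"),  # same user, same day -> no new user day
    ("96117893@N05", "2012-04-15"),  # same user, next day
    ("13074nn1@N01", "2012-04-14"),  # different user, same day
]
userdays_set = {user + day for user, day in posts}
print(len(userdays_set))  # 4 posts -> 3 user days
```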
We have created an example processing pipeline for counting user days worldwide, using the Flickr YFCC100M dataset, which contains about 50 million georeferenced photos uploaded by Flickr users under a Creative Commons license.
The full processing pipeline can be viewed in a separate collection of notebooks.
In the following, we will use the HLL data to replicate these visuals.
We'll use Python methods stored in and loaded from modules.
There's a difference between collecting and visualizing data.
During data collection, information can be stored at a finer
granularity, to allow some flexibility for
tuning visualizations later.
In the YFCC100M Example, we "collect" data at a GeoHash granularity of 5
(about 3 km "snapping distance" for coordinates).
During data visualization, these coordinates and HLL sets are aggregated
further to a worldwide grid of 100x100 km bins.
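The aggregation step can be sketched as simple integer binning, assuming projected coordinates in meters and a 100 km bin size (`BIN_SIZE` and `coord_to_bin` are illustrative names, not part of the actual pipeline):

```python
BIN_SIZE = 100_000  # 100 km in projected meters (assumption)

def coord_to_bin(x: float, y: float, size: int = BIN_SIZE):
    """Snap a projected coordinate to the lower-left corner of its grid bin."""
    return (int(x // size) * size, int(y // size) * size)

# two nearby points fall into the same bin, a distant one does not
print(coord_to_bin(1_234_567, 6_543_210))  # (1200000, 6500000)
print(coord_to_bin(1_299_999, 6_599_999))  # (1200000, 6500000)
print(coord_to_bin(2_000_001, 6_500_000))  # (2000000, 6500000)
```

All coordinates that snap to the same bin share one HLL set, so cardinality can later be estimated per bin.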
Have a look at the data structure at data collection time.
from pathlib import Path
OUTPUT = Path.cwd() / "out"
OUTPUT.mkdir(exist_ok=True)
%load_ext autoreload
%autoreload 2
import sys
module_path = str(Path.cwd().parents[0] / "py")
if module_path not in sys.path:
    sys.path.append(module_path)
from modules import tools
Load the full benchmark dataset.
filename = "yfcc_latlng.csv"
yfcc_input_csv_path = OUTPUT / filename
if not yfcc_input_csv_path.exists():
    sample_url = tools.get_sample_url()
    yfcc_csv_url = f'{sample_url}/download?path=%2F&files=(unknown)'
    tools.get_stream_file(url=yfcc_csv_url, path=yfcc_input_csv_path)
Load csv data to pandas dataframe.
%%time
import pandas as pd
dtypes = {'latitude': float, 'longitude': float}
df = pd.read_csv(
    OUTPUT / "yfcc_latlng.csv", dtype=dtypes, encoding='utf-8')
print(len(df))
The dataset contains a total of 451,949 distinct coordinates,
at a GeoHash precision of 5 (about 2,500 meters snapping distance).
df.head()
Calculate a single HLL cardinality (first row):
sample_hll_set = df.loc[0, "date_hll"]
from python_hll.util import NumberUtil
hex_string = sample_hll_set[2:]
print(sample_hll_set[2:])
hll = HLL.from_bytes(NumberUtil.from_hex(hex_string, 0, len(hex_string)))
hll.cardinality()
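The stored value is a hex string with a two-character prefix (e.g. a PostgreSQL-style `\x` escape), which is why `[2:]` strips it before decoding. The decoding step itself can be sketched with the standard library alone (the hex payload below is hypothetical):

```python
sample_value = "\\x1a2b3c"      # hypothetical export value with '\x' prefix
hex_string = sample_value[2:]   # strip the two-character prefix -> "1a2b3c"
raw = bytes.fromhex(hex_string) # the raw bytes the HLL is built from
print(len(raw))                 # 3 bytes
```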
The two components of the structure are highlighted below.
tools.display_header_stats(
    df.head(),
    base_cols=["latitude", "longitude"],
    metric_cols=["date_hll"])
The color refers to the two components:
from modules import yfcc
filename = "yfcc_all_est_benchmark.csv"
yfcc_benchmark_csv_path = OUTPUT / filename
if not yfcc_benchmark_csv_path.exists():
    sample_url = tools.get_sample_url()
    yfcc_csv_url = f'{sample_url}/download?path=%2F&files=(unknown)'
    tools.get_stream_file(
        url=yfcc_csv_url, path=yfcc_benchmark_csv_path)
grid = yfcc.grid_agg_fromcsv(
    OUTPUT / filename,
    columns=["xbin", "ybin", "userdays_hll"])
grid[grid["userdays_hll"].notna()].head()
tools.display_header_stats(
    grid[grid["userdays_hll"].notna()].head(),
    base_cols=["geometry"],
    metric_cols=["userdays_hll"])
Calculate the cardinality for all bins and store in extra column:
def hll_from_byte(hll_set: str):
    """Return HLL set from its hex-encoded binary representation"""
    hex_string = hll_set[2:]
    return HLL.from_bytes(
        NumberUtil.from_hex(
            hex_string, 0, len(hex_string)))

def cardinality_from_hll(hll_set):
    """Turn binary hll into HLL set and return cardinality"""
    hll = hll_from_byte(hll_set)
    return hll.cardinality() - 1
Calculate cardinality for all bins.
This process will take some time (about 3 minutes),
due to the slow python-hll implementation.
%%time
mask = grid["userdays_hll"].notna()
grid["userdays_est"] = 0
grid.loc[mask, 'userdays_est'] = grid[mask].apply(
    lambda x: cardinality_from_hll(
        x["userdays_hll"]),
    axis=1)
In the cell above:
grid["userdays_hll"].notna() creates a boolean mask (a pd.Series of True/False values).
grid.loc[mask, 'userdays_est'] uses the index of the mask to select rows, and the column 'userdays_est' to assign values.
grid[mask].apply(..., axis=1) applies the given lambda to each selected row.
From now on, disable warnings:
import warnings
warnings.filterwarnings('ignore')
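The masking-and-assignment pattern used for the grid can be sketched on a toy DataFrame, with short hypothetical strings standing in for the HLL values:

```python
import pandas as pd

# toy frame standing in for `grid` (hypothetical values)
toy = pd.DataFrame({"userdays_hll": ["\\xaa", None, "\\xbb"]})
mask = toy["userdays_hll"].notna()     # boolean Series: True, False, True
toy["userdays_est"] = 0                # default for empty bins
# assign only where the mask is True; index alignment maps results back
toy.loc[mask, "userdays_est"] = toy.loc[mask, "userdays_hll"].apply(len)
print(toy["userdays_est"].tolist())    # [4, 0, 4]
```

Rows where the mask is False keep their default value, so missing HLL sets never reach the (slow) decoding function.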
Have a look at the cardinality below.
grid[grid["userdays_hll"].notna()].head()
Activate the bokeh holoviews extension.
from modules import grid as yfcc_grid
import holoviews as hv
hv.notebook_extension('bokeh')