Simplifying lake with 200,000 nodes

One place where such data reduction efforts pay off over time is in downloading “scrubbed” data from a planet file (or regional extracts of it). By “scrubbed” I mean only the latest version of each element, not the full history behind it. In practice, these latest-version snapshots are what most people mean by “OSM data”; full-history files, where every element drags its entire edit history along behind it like a snail’s tail, are downloaded far less often. So it’s a long-run win for OSM to keep its data efficient.

Especially data of this sort (big, rough-edged pigs of it, no offense to pigs). A lot of bloated data can be spun into silk purses, and that’s what we’re talking about here.

Blow a bubble, make a silk purse, follow the breadcrumbs: the pieces to do this mostly exist, the specifics don’t yet, but they’re only a relatively simple toolchain sketch away. Something like the sketch below.
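
For instance, a minimal sketch of one link in such a toolchain, assuming the lake’s outline has already been extracted as a list of (lon, lat) coordinates (e.g. via Overpass or osmium; none of that is specified here), could use Shapely’s Douglas–Peucker simplification. The tolerance value is purely illustrative:

```python
from shapely.geometry import Polygon

def simplify_lake(coords, tolerance=0.0001):
    """Reduce a dense lake outline to a lighter one.

    coords: list of (lon, lat) tuples forming a closed ring.
    tolerance: max deviation in degrees (0.0001 is roughly 11 m
    at the equator) -- an illustrative value, not a recommendation.
    """
    lake = Polygon(coords)
    # preserve_topology=True keeps the simplified ring valid
    # (no self-intersections), which matters if the result is
    # ever uploaded back to OSM.
    simplified = lake.simplify(tolerance, preserve_topology=True)
    print(f"{len(lake.exterior.coords)} nodes -> "
          f"{len(simplified.exterior.coords)} nodes")
    return simplified
```

On a 200,000-node ring this kind of pass is where most of the “bloat to silk purse” reduction would actually happen; the rest of the toolchain (extraction, conflation, upload) is the part still waiting on specifics.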
