On Wed, Jun 08, 2011 at 12:58:17AM +0200, Beni Buess wrote:
I'm now working on something like a public transport API, which would make it possible to get data consolidated according to the rules of the new public transport schema, while respecting the fact that there are thousands of stations still tagged with the "old" scheme. See some simple stats on this [1].
Sorry to jump in on this, but I hope 'consolidation' doesn't mean mass-retagging things. I think the 'new' schema allows the 'old' tags to co-exist, so just leave them be.
Personally, I prefer the 'old' schema because the 'new' proposal is about as simple and readable as an IEEE standard (or, if you prefer these things: the German tax laws). I mean: two nodes and one relation for a simple one-way bus stop? Why? In 95% of cases the 'old' schema will do the job just fine, and your map proves that.
The API is not yet open for public use since I'm still experimenting to find the best-suited DB schema. I've tried the schema used by osm2pgsql, since it would then be easy to also render the data with Mapnik, but it is not really suited for complex queries like "all nodes/ways in the same stop_area as a given one". Maybe I have to store some data redundantly in the DB, just to speed things up while still being able to feed Mapnik from it. But that would make the update process more complicated. Any ideas?
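The redundancy in question could look something like this: a precomputed member-to-stop_area index, so that "all members in the same stop_area" becomes a single lookup instead of a relation join. This is only a sketch of the idea; all names (stop_areas, build_member_index, etc.) are invented for illustration and are not part of any real schema.

```python
def build_member_index(stop_areas):
    """stop_areas: dict mapping a stop_area id to a list of member ids.
    Returns the redundant reverse index: member id -> stop_area id."""
    index = {}
    for area_id, members in stop_areas.items():
        for member in members:
            index[member] = area_id
    return index

def same_stop_area(member_id, stop_areas, index):
    """Return all other members sharing a stop_area with member_id."""
    area_id = index.get(member_id)
    if area_id is None:
        return []
    return [m for m in stop_areas[area_id] if m != member_id]

# Toy data: one stop_area grouping two stop nodes and a platform way.
stop_areas = {"area:1": ["node:10", "node:11", "way:20"]}
index = build_member_index(stop_areas)
print(same_stop_area("node:10", stop_areas, index))  # -> ['node:11', 'way:20']
```

In a database the same trick would be a denormalised column or lookup table that has to be refreshed whenever a stop_area relation changes, which is exactly where the update process gets more complicated.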
Maybe you should have a look at how I do the hiking map [1]. The philosophy is basically to derive specialised database schemata instead of using a one-size-fits-all version. That way you get tables that are suitable for both rendering and fast online queries.
I solved that by deriving tables from an osmosis planet database. (There is the beginning of a framework [2], but really just the beginning.) The advantage is that it has full updating support.
Another library with the same philosophy is imposm [3]. It imports specialised schemata directly from an OSM file. It might be a bit more mature, but there is no update support.
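To make the "derived schema" philosophy concrete, here is a minimal sketch: instead of querying the generic tag store for every request, copy just the objects an application needs into a small, purpose-built table. The function and tag selection below are invented for illustration, not taken from osgende or imposm.

```python
def derive_bus_stops(nodes):
    """nodes: iterable of (id, lat, lon, tags) from a generic OSM store.
    Returns rows for a hypothetical specialised bus_stops table,
    accepting both the 'old' and 'new' tagging schemes."""
    rows = []
    for node_id, lat, lon, tags in nodes:
        if tags.get("highway") == "bus_stop" or \
           tags.get("public_transport") == "platform":
            rows.append((node_id, lat, lon, tags.get("name", "")))
    return rows

nodes = [
    (1, 47.0, 8.0, {"highway": "bus_stop", "name": "Bahnhofplatz"}),
    (2, 47.1, 8.1, {"amenity": "bench"}),
]
print(derive_bus_stops(nodes))  # -> [(1, 47.0, 8.0, 'Bahnhofplatz')]
```

The resulting table is tiny and indexable exactly for the queries the map needs; the cost is that it has to be re-derived (or incrementally updated) whenever the source data changes.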
I'm going to try setting up an XAPI Switzerland using the new JXAPI code; it doesn't seem to be too hard (at least I hope so).
Cool.
The challenge is to maintain an up-to-date Swiss extract in the osmosis PostgreSQL DB.
Do you think that would be a problem? Is the available software (osmosis) capable of doing this in a more or less automatic way without too much fiddling? Any hints?
I would create an osmosis PostgreSQL DB from the Swiss extract and then use the minutely planet diffs to keep it up to date. After each update the DB needs to be cropped back to the Swiss boundary. This step actually needs some thinking, but it should be possible. (If you want to do it really well, you should write a special osmosis task that takes a polygon as input and crops the data already while updating the DB.)
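The cropping step could be sketched like this: after applying a diff, test each node against the extract polygon and drop those outside. The even-odd ray-casting test below is the standard point-in-polygon method; the function names and the crude rectangle standing in for the Swiss boundary are placeholders, not real osmosis code.

```python
def point_in_polygon(lon, lat, polygon):
    """polygon: list of (lon, lat) vertices. Even-odd ray-casting rule."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):
            # lon at which the edge crosses the horizontal ray through lat
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

def crop(nodes, polygon):
    """Keep only (id, lon, lat) nodes that lie inside the polygon."""
    return [n for n in nodes if point_in_polygon(n[1], n[2], polygon)]

# Crude illustrative bounding box around Switzerland, not a real .poly file:
swiss = [(5.9, 45.8), (10.5, 45.8), (10.5, 47.8), (5.9, 47.8)]
nodes = [(1, 8.5, 47.4), (2, 2.3, 48.9)]  # roughly Zurich and Paris
print(crop(nodes, swiss))  # -> [(1, 8.5, 47.4)]
```

The fiddly part is not the geometry test but the referential integrity: ways and relations that straddle the boundary still need their outside nodes, which is why doing the crop inside the update task itself would be the cleaner design.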
Gruss
Sarah
[1] http://dev.lonvia.de/trac/wiki/HikingMap
[2] http://dev.lonvia.de/trac/wiki/OsgendeFramework
[3] http://imposm.org/docs/imposm/latest/