Finally I've found some time to complete a working prototype of the new library pg_chameleon.
The GitHub repo is here: https://github.com/the4thdoctor/pg_chameleon. Please fork it if you want to debug it or give me some feedback.
The library exports the metadata from MySQL using SQLAlchemy. This information is used by the PostgreSQL library to rebuild the schema in a PostgreSQL database. Finally, the data is dumped to multiple files in CSV format and reloaded into PostgreSQL using the copy_expert command.
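To give an idea of the schema-rebuild step, here is a minimal sketch of translating MySQL column types into PostgreSQL DDL. The mapping table and helper function are illustrative assumptions, not pg_chameleon's actual code or its complete type list.

```python
# Hypothetical MySQL -> PostgreSQL type mapping used when rebuilding
# the schema on the PostgreSQL side. Only a few common types are shown.
TYPE_MAP = {
    "tinyint": "smallint",
    "int": "integer",
    "varchar": "character varying",
    "datetime": "timestamp without time zone",
    "text": "text",
}

def pg_column_def(name, mysql_type):
    """Translate a single MySQL column into a PostgreSQL column definition.

    Unknown types fall back to text, a deliberately lossy but safe default.
    """
    return '"%s" %s' % (name, TYPE_MAP.get(mysql_type, "text"))
```

With the metadata extracted, building a CREATE TABLE statement is then just a matter of joining the translated column definitions.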
The MySQL data is exported in CSV format using a custom SQL query that relies on MySQL's non-standard syntax for string concatenation.
The REPLACE function is also used to escape the double quotes.
The copy into PostgreSQL uses psycopg2's copy_expert cursor method, which is quick and very efficient.
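The reload step can be sketched as a small helper that streams a CSV file through copy_expert. The function name, table quoting, and COPY options are illustrative assumptions, not pg_chameleon's actual API.

```python
def copy_csv_to_postgres(cursor, table_name, csv_path):
    """Stream a CSV file into PostgreSQL via the cursor's copy_expert.

    copy_expert takes the full COPY statement plus a file-like object,
    so the data never has to be loaded into Python row by row.
    The COPY options shown here are a plausible sketch only.
    """
    sql = 'COPY "%s" FROM STDIN WITH CSV QUOTE \'"\'' % table_name
    with open(csv_path, "r") as csv_file:
        cursor.copy_expert(sql, csv_file)
```

In real use the cursor would come from a psycopg2 connection, e.g. `psycopg2.connect(dsn).cursor()`, with a commit after the copy.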
In the future I'll change the config file format to YAML, because it is simpler to manage and to write.
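A YAML config for a tool like this might look something like the fragment below; every key here is a hypothetical example, not the final pg_chameleon format.

```yaml
# Hypothetical layout, not the final pg_chameleon config format
mysql:
  host: localhost
  port: 3306
  user: replica
  password: secret
  schema: sakila
postgres:
  host: localhost
  port: 5432
  user: replica
  password: secret
  database: db_replica
copy:
  csv_dir: /tmp/pg_chameleon
```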
I'm also planning to write an alternative library to get the metadata from MySQL.
I think that, in general, using any ORM is a bad choice. Tuning the performance of their horrible queries, when the amount of data becomes serious, is a real pain.
IMHO their usage should be limited to toy-sized databases or quick-and-dirty proofs of concept.
And if you think I'm rude, then take a look at http://dbareactions.com/ .