dogsheep/github-to-sqlite |
0.6 |
2019-11-11 |
- New releases command for fetching releases for a repo, #11
- Repository topics are now fetched by the repos command
- github-to-sqlite repos now accepts multiple usernames
- Command now works without --auth file (using anonymous API calls), #9
|
2019-11-11T05:34:06Z |
simonw/datasette-render-markdown |
0.1a |
2019-11-09 |
First working release |
2019-11-09T23:49:58Z |
dogsheep/twitter-to-sqlite |
0.15 |
2019-11-09 |
- Import command no longer fails on empty files - #29
- Fixed bug in followers command - #30
- following table now has indexes - #28
|
2019-11-09T20:13:07Z |
simonw/yaml-to-sqlite |
0.2.1 |
2019-11-08 |
|
2019-11-08T06:46:02Z |
dogsheep/healthkit-to-sqlite |
0.4 |
2019-11-08 |
- Fixed workout latitude/longitude points import for iOS 13 - #10
|
2019-11-08T01:19:51Z |
simonw/sqlite-utils |
1.12.1 |
2019-11-07 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v1-12-1 |
2019-11-07T05:00:55Z |
simonw/sqlite-utils |
1.12 |
2019-11-07 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v1-12 |
2019-11-07T05:00:24Z |
dogsheep/twitter-to-sqlite |
0.14 |
2019-11-04 |
- search command gained --since_id and --since options, for retrieving tweets since the last time the search was run
- search command is now documented. Closes #3.
|
2019-11-04T05:33:56Z |
simonw/sqlite-transform |
0.3 |
2019-11-04 |
return is now optional for one-line lambdas, e.g.
sqlite-transform lambda my.db mytable mycolumn --code='str(value).upper()'
|
2019-11-04T04:39:06Z |
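The lambda command described above can be approximated with the standard library. This is a minimal sketch of applying a Python expression to every value in a column, not the tool's actual implementation; table and column names match the example invocation above.

```python
import sqlite3

def transform_column(conn, table, column, fn):
    # Apply fn to every value in table.column, writing results back in place.
    rows = conn.execute(f"SELECT rowid, [{column}] FROM [{table}]").fetchall()
    for rowid, value in rows:
        conn.execute(
            f"UPDATE [{table}] SET [{column}] = ? WHERE rowid = ?",
            (fn(value), rowid),
        )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (mycolumn TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?)", [("hello",), ("world",)])
# Equivalent of --code='str(value).upper()'
transform_column(conn, "mytable", "mycolumn", lambda value: str(value).upper())
print([r[0] for r in conn.execute("SELECT mycolumn FROM mytable")])
# → ['HELLO', 'WORLD']
```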
simonw/sqlite-transform |
0.2 |
2019-11-04 |
- Added lambda command, which lets you specify a Python expression (or multiple lines of code) to be executed against every value in the column. Documentation here. (#2)
- Added a parsedate command, which works like parsedatetime except it outputs just the date component. (#1)
|
2019-11-04T02:26:57Z |
simonw/sqlite-transform |
0.1 |
2019-11-04 |
- First release, supporting only the parsedatetime command.
|
2019-11-04T02:19:37Z |
simonw/datasette |
0.30.2 |
2019-11-02 |
https://datasette.readthedocs.io/en/latest/changelog.html#v0-30-2 |
2019-11-02T23:33:13Z |
simonw/datasette-cluster-map |
0.7 |
2019-11-02 |
- Upgraded Leaflet to 1.5.1
- Upgraded leaflet.markercluster to 1.4.1
This fixes a bug where datasette-cluster-map and datasette-leaflet-geojson could not run within the same Datasette instance. |
2019-11-02T03:54:36Z |
simonw/datasette-leaflet-geojson |
0.4 |
2019-11-02 |
- Fixed bug where plugin fails to render a map if the cell value was truncated (#3)
- Fixed incompatibility when loaded in the same environment as
datasette-pretty-json (#6)
|
2019-11-02T01:48:09Z |
simonw/datasette |
0.30.1 |
2019-11-02 |
https://datasette.readthedocs.io/en/stable/changelog.html#v0-30-1 |
2019-11-02T00:06:02Z |
simonw/datasette |
0.30 |
2019-10-30 |
https://datasette.readthedocs.io/en/stable/changelog.html#v0-30 |
2019-10-30T18:51:30Z |
dogsheep/twitter-to-sqlite |
0.13 |
2019-10-30 |
- New mentions-timeline command (#26)
|
2019-10-30T02:22:30Z |
simonw/datasette |
0.29.3 |
2019-10-18 |
https://datasette.readthedocs.io/en/stable/changelog.html#v0-29-3 |
2019-10-18T05:24:54Z |
dogsheep/twitter-to-sqlite |
0.12 |
2019-10-17 |
- The source column for a tweet is now a foreign key to a new sources table - #12
- New migrations system to upgrade existing databases to handle the new source column extraction - #23
- Experimental implementation of new twitter-to-sqlite search tweets.db search-term command, which runs a search and saves the tweets from that search - #3
- Fixed bug where sometimes a user record for the authenticated user was not persisted to the users table
|
2019-10-17T18:00:28Z |
dogsheep/twitter-to-sqlite |
0.11.1 |
2019-10-16 |
- Fixed bugs running home-timeline --since from scratch: if tables were missing, the script would throw an error.
|
2019-10-16T22:31:51Z |
dogsheep/swarm-to-sqlite |
0.2 |
2019-10-16 |
- Added --since option, closes #3
|
2019-10-16T20:40:55Z |
dogsheep/twitter-to-sqlite |
0.11 |
2019-10-16 |
- Added --since_id and --since to user-timeline command, refs #20
- --since and --since_id options for home-timeline, closes #19
- import command now works on files and directories, closes #22
|
2019-10-16T19:38:42Z |
dogsheep/twitter-to-sqlite |
0.10 |
2019-10-15 |
- favorites command now populates favorited_by table - #14
- favorites --stop_after option - #20
- Store unescaped full_text of Tweet - #21
|
2019-10-15T18:56:09Z |
simonw/datasette-auth-github |
0.11 |
2019-10-14 |
- Subclasses can now customize the creation of the redirect cookie - #49 - thanks, @ananis25
|
2019-10-14T16:06:25Z |
simonw/datasette-leaflet-geojson |
0.3 |
2019-10-14 |
- Fixed bug displaying multiple polygon maps on a page - #4 - thanks, @chris48s
- Upgraded Leaflet to 1.5.1 - #5 - thanks, @chris48s
|
2019-10-14T15:18:48Z |
simonw/datasette-render-timestamps |
0.2 |
2019-10-14 |
|
2019-10-14T14:52:55Z |
simonw/datasette-render-timestamps |
0.1 |
2019-10-14 |
Initial release |
2019-10-14T14:51:38Z |
dogsheep/github-to-sqlite |
0.5 |
2019-10-13 |
- New command: github-to-sqlite issue-comments for importing comments on issues - #7
- github-to-sqlite issues now accepts optional --issue=1 argument
- Fixed bug inserting users into already-created table with wrong columns - #6
|
2019-10-13T05:30:05Z |
dogsheep/twitter-to-sqlite |
0.9 |
2019-10-11 |
- New twitter-to-sqlite home-timeline command, for retrieving your timeline of tweets from people you follow - #18
- twitter-to-sqlite import created tables now use the archive_ prefix instead of archive-, for easier querying
- Running twitter-to-sqlite import now deletes existing archive_ tables and recreates them - #17
|
2019-10-11T16:57:25Z |
dogsheep/twitter-to-sqlite |
0.8 |
2019-10-11 |
- New twitter-to-sqlite import twitter.db archive.zip command for importing data from a Twitter export file. #4 - documentation here.
|
2019-10-11T06:46:52Z |
simonw/datasette-auth-github |
0.10 |
2019-10-07 |
- New cacheable_prefixes mechanism to avoid performance issues caused by adding cache-control: private to static assets - #47
|
2019-10-07T15:40:02Z |
dogsheep/pocket-to-sqlite |
0.1 |
2019-10-07 |
Initial release |
2019-10-07T05:18:20Z |
dogsheep/twitter-to-sqlite |
0.7 |
2019-10-07 |
- New statuses-lookup command for bulk fetching tweets by their IDs - #13
|
2019-10-07T00:33:28Z |
dogsheep/twitter-to-sqlite |
0.6 |
2019-10-06 |
- New experimental track and follow commands for subscribing to the Twitter real-time API #11. Documentation for track and follow.
- Documentation for --sql and --attach, refs #8
|
2019-10-06T04:52:18Z |
dogsheep/genome-to-sqlite |
0.1 |
2019-09-19 |
First release |
2019-09-19T15:41:17Z |
simonw/datasette-atom |
0.1a |
2019-09-17 |
Initial work in progress |
2019-09-17T15:40:22Z |
dogsheep/github-to-sqlite |
0.4 |
2019-09-17 |
- Added github-to-sqlite repos command, #3
|
2019-09-17T00:19:42Z |
dogsheep/github-to-sqlite |
0.3 |
2019-09-14 |
license is now extracted from the repos table into a separate licenses table with a foreign key, #2
|
2019-09-14T21:50:01Z |
dogsheep/github-to-sqlite |
0.2 |
2019-09-14 |
- Added the github-to-sqlite starred command for retrieving starred repos, #1
|
2019-09-14T21:32:34Z |
dogsheep/github-to-sqlite |
0.1.1 |
2019-09-14 |
- Fix bug in authentication handling code
|
2019-09-14T19:42:08Z |
simonw/datasette-rure |
0.3 |
2019-09-11 |
- Documentation now links to interactive demos
- Now uses an LRU cache for compiled regular expressions, which can give a 10x speedup on queries #3
|
2019-09-11T23:00:32Z |
simonw/datasette-rure |
0.2 |
2019-09-11 |
- Added regexp_match() function, #1
- Added regexp_matches() function, #2
|
2019-09-11T03:25:22Z |
simonw/datasette-rure |
0.1 |
2019-09-10 |
First working version |
2019-09-10T18:17:42Z |
dogsheep/twitter-to-sqlite |
0.5 |
2019-09-10 |
- Added followers-ids and friends-ids subcommands
|
2019-09-10T17:39:47Z |
dogsheep/twitter-to-sqlite |
0.4 |
2019-09-09 |
- New users-lookup command for fetching multiple user profiles, including using new --sql and --attach options
- New list-members subcommand for fetching members of a list
- Added stop_after option to user-timeline command
|
2019-09-09T22:43:05Z |
dogsheep/twitter-to-sqlite |
0.3 |
2019-09-04 |
Extract places and media into separate tables
Demo: https://twitter-to-sqlite-demo.now.sh/ |
2019-09-04T22:11:01Z |
dogsheep/twitter-to-sqlite |
0.2 |
2019-09-04 |
Full text search for tweets table |
2019-09-04T22:09:46Z |
dogsheep/twitter-to-sqlite |
Alpha release |
2019-09-04 |
|
2019-09-04T22:08:18Z |
simonw/sqlite-utils |
1.11 |
2019-09-03 |
https://sqlite-utils.readthedocs.io/en/stable/changelog.html#v1-11 |
2019-09-03T01:03:27Z |
simonw/sqlite-utils |
1.10 |
2019-09-03 |
https://sqlite-utils.readthedocs.io/en/stable/changelog.html#v1-10 |
2019-09-03T00:46:27Z |
simonw/sqlite-utils |
1.9 |
2019-09-03 |
https://sqlite-utils.readthedocs.io/en/stable/changelog.html#v1-9 |
2019-09-03T00:46:02Z |
simonw/sqlite-utils |
1.8 |
2019-09-03 |
https://sqlite-utils.readthedocs.io/en/stable/changelog.html#v1-8 |
2019-09-03T00:45:42Z |
simonw/datasette |
0.29.2 |
2019-09-03 |
- Bumped Uvicorn to 0.8.4, fixing a bug where the querystring was not included in the server logs. (#559)
- Fixed bug where the navigation breadcrumbs were not displayed correctly on the page for a custom query. (#558)
- Fixed bug where custom query names containing unicode characters caused errors.
https://datasette.readthedocs.io/en/stable/changelog.html#v0-29-2 |
2019-09-03T00:33:35Z |
dogsheep/swarm-to-sqlite |
0.1 |
2019-08-31 |
First usable release. |
2019-08-31T02:58:32Z |
simonw/csvs-to-sqlite |
csvs-to-sqlite 1.0 |
2019-08-03 |
This release drops support for Python 2.x #55 |
2019-08-03T10:58:15Z |
simonw/db-to-sqlite |
1.0.2 |
2019-08-03 |
Fix for #18 - no longer throws error on empty tables |
2019-08-03T04:09:41Z |
simonw/sqlite-utils |
1.7.1 |
2019-07-28 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v1-7-1 |
2019-07-28T12:05:36Z |
simonw/sqlite-utils |
1.7 |
2019-07-28 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v1-7 |
2019-07-28T12:03:21Z |
dogsheep/healthkit-to-sqlite |
0.3.2 |
2019-07-26 |
Fix for #9 - Too many SQL variables bug |
2019-07-26T06:12:12Z |
dogsheep/healthkit-to-sqlite |
0.3.1 |
2019-07-24 |
Uses less RAM - see #7 |
2019-07-24T06:38:36Z |
dogsheep/healthkit-to-sqlite |
0.3 |
2019-07-22 |
- Tool now displays a progress bar during import - you can disable it with --silent #5
- You can pass a path to a decompressed XML file instead of a zip file, using --xml
- Records table is now broken up into different tables for each type of recorded data #6
|
2019-07-22T03:33:32Z |
dogsheep/healthkit-to-sqlite |
0.2 |
2019-07-20 |
Fixed a bug where duplicate records could crash the import. |
2019-07-20T16:44:41Z |
dogsheep/healthkit-to-sqlite |
Initial release |
2019-07-20 |
|
2019-07-20T16:43:09Z |
simonw/sqlite-utils |
1.6 |
2019-07-19 |
sqlite-utils insert can now accept TSV data via the new --tsv option (#41)
|
2019-07-19T05:36:48Z |
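The new --tsv option amounts to parsing tab-separated rows and inserting them. A minimal stdlib sketch of that behaviour, assuming a hypothetical helper name and a trivial all-text schema (the real tool does type detection and much more):

```python
import csv
import io
import sqlite3

def insert_tsv(conn, table, tsv_text):
    # Parse tab-separated rows with the csv module; first row is the header.
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    headers = next(reader)
    cols = ", ".join(f"[{h}]" for h in headers)
    conn.execute(f"CREATE TABLE IF NOT EXISTS [{table}] ({cols})")
    placeholders = ", ".join("?" for _ in headers)
    conn.executemany(f"INSERT INTO [{table}] VALUES ({placeholders})", reader)
    conn.commit()

conn = sqlite3.connect(":memory:")
insert_tsv(conn, "mytable", "name\tage\nCleo\t5\nPancakes\t4\n")
print(conn.execute("SELECT count(*) FROM mytable").fetchone()[0])  # → 2
```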
simonw/datasette |
0.29.1 |
2019-07-14 |
- Fixed bug with static mounts using relative paths which could lead to traversal exploits (#555) - thanks Abdussamet Kocak!
https://datasette.readthedocs.io/en/stable/changelog.html#v0-29-1 |
2019-07-14T01:43:44Z |
simonw/datasette-auth-github |
0.9.1 |
2019-07-14 |
- Updated documentation to reflect new one hour cookie_ttl default - #43
|
2019-07-14T00:59:24Z |
simonw/datasette-auth-github |
0.9 |
2019-07-14 |
- Explicit log in screen now includes SVG GitHub logo on the button - #42
- Default signed cookie TTL is now 1 hour, not 24 hours - #43
|
2019-07-14T00:41:33Z |
simonw/datasette-auth-github |
0.8 |
2019-07-13 |
Now compatible with Python 3.5, which means it can run on Glitch! https://datasette-auth-github-demo.glitch.me/ #38
This also means we now have no installation dependencies, since the code now uses the standard library to make API calls instead of depending on http3. #40 |
2019-07-13T18:43:06Z |
simonw/datasette-auth-github |
0.7 |
2019-07-11 |
- New require_auth configuration option. This defaults to True (reflecting existing behaviour) when datasette-auth-github is used as a Datasette plugin, but it defaults to False if you use the wrapper ASGI middleware class directly. #37
|
2019-07-11T15:07:15Z |
simonw/datasette-cors |
0.3 |
2019-07-11 |
Now with unit tests! #1 |
2019-07-11T04:43:24Z |
simonw/datasette-auth-github |
0.6.3 |
2019-07-08 |
Additional documentation on scope["auth"] when using as ASGI middleware. |
2019-07-08T16:51:39Z |
simonw/datasette-auth-github |
0.6.2 |
2019-07-08 |
Updated README for PyPI |
2019-07-08T03:47:38Z |
simonw/datasette |
Datasette 0.29 |
2019-07-08 |
ASGI, new plugin hooks, facet by date and much, much more… See the release notes for full details. |
2019-07-08T03:43:13Z |
simonw/datasette-auth-github |
0.6.1 |
2019-07-07 |
Minor code clean-up and updated one-line description for PyPI / README. |
2019-07-07T20:39:19Z |
simonw/datasette-auth-github |
0.6 |
2019-07-07 |
- Redirects back to where you were after you login, using a new asgi_auth_redirect cookie - #26
- Unset asgi_auth_logout cookie when you sign in again - #28
- Fixed bug where API call to GitHub intermittently failed with ConnectionResetError - #27
- More robust creation of derived cookie signing secret using hashlib.pbkdf2_hmac
- HTML pages now served with charset=UTF-8 - #30
|
2019-07-07T19:41:47Z |
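The 0.6 notes mention deriving the cookie signing secret with hashlib.pbkdf2_hmac. A minimal sketch of that kind of key derivation; the salt, iteration count, and function name here are illustrative assumptions, not the plugin's actual values:

```python
import hashlib

def derive_signing_secret(secret: str,
                          salt: bytes = b"cookie-signing",
                          iterations: int = 100_000) -> bytes:
    # Stretch a user-supplied secret into a fixed-length 32-byte signing key.
    return hashlib.pbkdf2_hmac("sha256", secret.encode("utf-8"), salt, iterations)

key = derive_signing_secret("my-oauth-client-secret")
print(len(key))  # → 32
```

Deriving a fresh key this way means the raw OAuth client secret is never used directly for cookie signing.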
simonw/datasette-auth-github |
0.5 |
2019-07-07 |
- New allow_teams configuration option for restricting access to members of a GitHub team - #11
- Signed cookies expire after a TTL (customize with new cookie_ttl setting) - #22
- Documentation on using this as ASGI middleware - #19
- Avoid 404 on /-/auth-callback if user is logged in - #24
- Added cookie_version setting for invalidating all cookies - #18
|
2019-07-07T02:37:16Z |
simonw/datasette-auth-github |
0.4 |
2019-07-06 |
- More compact JSON encoding for authentication cookie value
- Support single string values for allow_users / allow_orgs options, #16
|
2019-07-06T22:03:41Z |
simonw/datasette-auth-github |
0.3.2 |
2019-07-06 |
- Fixed bug where custom template was not correctly included in the package #15
|
2019-07-06T22:01:45Z |
simonw/datasette-auth-github |
0.3.1 |
2019-07-06 |
- Fixed bug where we were requesting the incorrect OAuth scope when using allow_orgs #14
|
2019-07-06T17:27:46Z |
simonw/datasette-auth-github |
0.3 |
2019-07-06 |
- Ability to restrict access to specific users or members of specific GitHub organizations #4
|
2019-07-06T17:15:29Z |
simonw/datasette-auth-github |
0.2 |
2019-07-06 |
- /-/logout URL for logging out #7
- Custom navigation showing login state #5
- Restored ASGI lifespan support #10
- disable_auto_login setting #9
- Cache-Control: private #6
|
2019-07-06T17:14:02Z |
simonw/datasette-auth-github |
0.1.1 |
2019-07-05 |
Outbound calls to the GitHub API are now non-blocking (using http3) - #8 |
2019-07-05T16:00:07Z |
simonw/datasette-auth-github |
0.1 |
2019-07-05 |
Initial working release. |
2019-07-05T15:58:24Z |
simonw/csvs-to-sqlite |
csvs-to-sqlite 0.9.2 |
2019-07-03 |
Bumped dependencies and pinned pytest to version 4 (5 is incompatible with Python 2.7). |
2019-07-03T04:37:15Z |
simonw/db-to-sqlite |
1.0.1 |
2019-07-01 |
|
2019-07-01T04:09:04Z |
simonw/db-to-sqlite |
1.0 |
2019-07-01 |
See the README for full usage instructions.
- Instead of using --connection, the connection string is now a required positional argument, #14
- --sql must now be accompanied by --output specifying the table the query results should be written to
- --redact tablename columnname option can be used to redact values, #2
- Foreign keys are now created with indexes; use --no-index-fks to disable this, #12
- --table can now be used multiple times, #6
- README and --help now include example connection strings
- README also details how this can be used with Heroku Postgres
|
2019-07-01T01:32:47Z |
simonw/db-to-sqlite |
0.8 |
2019-06-29 |
- Added --progress option to show progress bars during import - #7
|
2019-06-29T21:53:58Z |
simonw/db-to-sqlite |
0.7 |
2019-06-29 |
- Support pip install db-to-sqlite[postgresql] #4
- Documentation for both that and pip install db-to-sqlite[mysql]
|
2019-06-29T21:31:00Z |
simonw/db-to-sqlite |
0.6 |
2019-06-29 |
- --all can now add many foreign key relationships without a VACUUM between each one, #8
- Added unit tests against MySQL, refs #5
|
2019-06-29T15:27:18Z |
simonw/sqlite-utils |
1.3 |
2019-06-29 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v1-3 |
2019-06-29T06:39:32Z |
simonw/db-to-sqlite |
0.5 |
2019-06-26 |
- Foreign keys are now all added at the end, which means we can support circular foreign key references #1
- Dropped dependency on toposort
- Added --all --skip=table option for skipping one or more tables when running --all
|
2019-06-26T15:57:17Z |
simonw/db-to-sqlite |
0.4 |
2019-06-26 |
- Create --all tables in toposort order
- Depend on sqlite-utils version 0.14 or higher
|
2019-06-26T15:55:54Z |
simonw/db-to-sqlite |
0.3 |
2019-06-26 |
Anchor to sqlite-utils==0.13 to pick up a breaking change. |
2019-06-26T15:54:56Z |
simonw/sqlite-utils |
1.2.2 |
2019-06-26 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v1-2-2 |
2019-06-26T04:24:33Z |
simonw/csvs-to-sqlite |
csvs-to-sqlite 0.9.1 |
2019-06-24 |
- Fixed bug where -f option used FTS4 even when FTS5 was available (#41)
|
2019-06-24T15:21:12Z |
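The FTS4/FTS5 bug above comes down to detecting which full-text search module the SQLite build supports. A sketch of one way to probe for that, assuming a hypothetical helper name (not the project's actual detection code):

```python
import sqlite3

def best_fts_version(conn):
    # Try creating a throwaway virtual table with each FTS module,
    # newest first; the first one that succeeds is available.
    for fts in ("fts5", "fts4"):
        try:
            conn.execute(f"CREATE VIRTUAL TABLE fts_probe USING {fts}(content)")
            conn.execute("DROP TABLE fts_probe")
            return fts
        except sqlite3.OperationalError:
            continue
    return None

conn = sqlite3.connect(":memory:")
print(best_fts_version(conn))
```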
simonw/yaml-to-sqlite |
0.2 |
2019-06-23 |
Better README |
2019-06-23T22:55:50Z |
simonw/sqlite-utils |
1.2.1 |
2019-06-21 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v1-2-1 |
2019-06-21T00:06:29Z |
simonw/datasette-json-html |
0.5 - tooltips and demos |
2019-06-14 |
Links can now have tooltips (#2):
{
"href": "https://simonwillison.net/",
"label": "Simon Willison",
"title": "My blog"
}
Also added a live demo and linked to it throughout the README (#3, #1) |
2019-06-14T01:33:44Z |
simonw/datasette-render-binary |
0.3 |
2019-06-13 |
Now uses the filetype module to suggest a possible format. |
2019-06-13T16:16:36Z |
simonw/datasette-render-binary |
0.2 |
2019-06-13 |
Added screenshot. |
2019-06-13T16:14:52Z |
simonw/sqlite-utils |
1.2 |
2019-06-13 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v1-2 |
2019-06-13T06:42:21Z |
simonw/datasette-render-binary |
0.1 |
2019-06-09 |
|
2019-06-09T16:10:36Z |
simonw/datasette-bplist |
0.1 |
2019-06-09 |
Initial release. |
2019-06-09T01:19:55Z |
simonw/sqlite-utils |
1.1 |
2019-05-29 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v1-1
- Support for ignore=True / --ignore for ignoring inserted records if the primary key already exists (#21)
- Ability to add a column that is a foreign key reference using fk=... / --fk (#16)
|
2019-05-29T05:15:22Z |
simonw/sqlite-utils |
1.0.1 |
2019-05-28 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v1-0-1 |
2019-05-28T00:51:21Z |
simonw/sqlite-utils |
1.0 |
2019-05-25 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v1-0 |
2019-05-25T01:19:21Z |
simonw/datasette |
Datasette 0.28 |
2019-05-19 |
Datasette 0.28 - a salmagundi of new features!
* No longer immutable! Datasette now supports databases that change.
* Faceting improvements including facet-by-JSON-array and the ability to define custom faceting using plugins.
* datasette publish cloudrun lets you publish databases to Google's new Cloud Run hosting service.
* New register_output_renderer plugin hook for adding custom output extensions to Datasette in addition to the default .json and .csv .
* Dozens of other smaller features and tweaks - see the release notes for full details. |
2019-05-19T21:42:28Z |
simonw/datasette-render-html |
|
2019-04-30 |
|
2019-04-30T01:59:32Z |
simonw/sqlite-utils |
0.14 |
2019-02-24 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v0-14 |
2019-02-24T23:15:16Z |
simonw/sqlite-utils |
0.13 |
2019-02-24 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v0-13 |
2019-02-24T07:00:14Z |
simonw/sqlite-utils |
0.12 |
2019-02-23 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v0-12 |
2019-02-23T02:31:29Z |
simonw/sqlite-utils |
0.11 |
2019-02-23 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v0-11 |
2019-02-23T02:15:34Z |
simonw/db-to-sqlite |
0.2 |
2019-02-08 |
- --all option can now be used to duplicate an entire database, including detecting foreign key relationships.
- --table option called without --sql will now mirror the specified table.
|
2019-02-08T06:07:36Z |
simonw/sqlite-utils |
0.10 |
2019-02-08 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v0-10 |
2019-02-08T05:19:33Z |
simonw/datasette |
Datasette 0.27 |
2019-02-06 |
https://datasette.readthedocs.io/en/stable/changelog.html#v0-27 |
2019-02-06T05:10:20Z |
simonw/sqlite-utils |
0.9 |
2019-01-29 |
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v0-9 |
2019-01-29T15:30:48Z |
simonw/sqlite-utils |
0.8 |
2019-01-28 |
Two new commands: sqlite-utils csv and sqlite-utils json
https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v0-8 |
2019-01-28T06:28:12Z |
simonw/datasette |
Datasette 0.26.1 |
2019-01-28 |
Release notes: https://datasette.readthedocs.io/en/stable/changelog.html#v0-26-1 |
2019-01-28T01:50:45Z |
simonw/sqlite-utils |
0.7 |
2019-01-25 |
Release notes are here: https://sqlite-utils.readthedocs.io/en/latest/changelog.html#v0-7 |
2019-01-25T07:27:57Z |
simonw/csvs-to-sqlite |
csvs-to-sqlite 0.9 |
2019-01-17 |
- Support for loading CSVs directly from URLs, thanks @betatim - #38
- New -pk/--primary-key options, closes #22
- Create FTS index for extracted column values
- Added --no-fulltext-fks option, closes #32
- Now using black for code formatting
- Bumped versions of dependencies
|
2019-01-17T05:20:23Z |
simonw/datasette |
Datasette 0.26 |
2019-01-10 |
Datasette 0.26 release notes |
2019-01-10T21:41:00Z |
simonw/datasette-json-html |
0.4.0 - <pre> support |
2019-01-02 |
You can use {"pre": "text"} to render text in a <pre> HTML tag:
{
"pre": "This\nhas\nnewlines"
}
Produces:
<pre>This
has
newlines</pre>
If the value attached to the "pre" key is itself a JSON object, that JSON will be pretty-printed:
{
"pre": {
"this": {
"object": ["is", "nested"]
}
}
}
Produces:
<pre>{
"this": {
"object": [
"is",
"nested"
]
}
}</pre>
|
2019-01-02T04:17:15Z |
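The pretty-printing behaviour described in the 0.4.0 notes above matches what the stdlib json module produces with indent=2; a quick sketch (not the plugin's code):

```python
import json

value = {"pre": {"this": {"object": ["is", "nested"]}}}
# When the value under "pre" is itself a JSON object, the plugin
# pretty-prints it; json.dumps with indent=2 gives the same shape.
print(json.dumps(value["pre"], indent=2))
```

The output matches the nested <pre> example shown in the release notes.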
simonw/datasette |
Datasette 0.25.2 |
2018-12-16 |
|
2018-12-16T21:45:39Z |
simonw/datasette |
Datasette 0.25.1 |
2018-12-16 |
Documentation improvements plus a fix for publishing to Zeit Now.
The datasette publish now command now uses Zeit's v1 platform, to work around the new 100MB image limit. Thanks, @slygent - closes #366.
|
2018-12-16T21:44:27Z |
simonw/datasette |
Datasette 0.25 |
2018-09-19 |
New plugin hooks, improved database view support and an easier way to use more recent versions of SQLite.
See full release notes here: https://datasette.readthedocs.io/en/latest/changelog.html#v0-25 |
2018-09-19T18:27:21Z |
simonw/datasette |
Datasette 0.24 |
2018-07-24 |
See full release notes here: http://datasette.readthedocs.io/en/latest/changelog.html#v0-24 |
2018-07-24T16:51:29Z |
simonw/datasette-vega |
datasette-vega 0.6.1 |
2018-07-10 |
Tooltips #10 now also include the size and color column values, if those options have been selected. |
2018-07-10T03:44:51Z |
simonw/datasette |
Datasette 0.23.2 |
2018-07-08 |
Minor bugfix and documentation release.
- CSV export now respects --cors, fixes #326
- Installation instructions including docker image - closes #328
- Fix for row pages for tables with / in, closes #325
|
2018-07-08T05:41:38Z |
simonw/datasette-vega |
datasette-vega 0.6 |
2018-07-07 |
Tooltips! #10
Cache-busting filename for CSS and JS, so new versions won't fail to load due to browser caching. #11 |
2018-07-07T01:20:53Z |
simonw/datasette-vega |
datasette-vega 0.5 |
2018-07-06 |
Datasette Vega now preserves graph settings across multiple loads of variants of the same page - for example, clicking column headers to re-order the data or applying suggested facets.
On the SQL page it will also persist graph settings across edits to the SQL query. #12 |
2018-07-06T03:34:28Z |
simonw/datasette |
Datasette 0.23.1 |
2018-06-21 |
Minor bugfix release.
- Correctly display empty strings in HTML table, closes #314
- Allow “.” in database filenames, closes #302
- 404s ending in slash redirect to remove that slash, closes #309
- Fixed incorrect display of compound primary keys with foreign key references. Closes #319
- Docs + example of canned SQL query using || concatenation. Closes #321
- Correctly display facets with value of 0 - closes #318
- Default ‘expand labels’ to checked in CSV advanced export
|
2018-06-21T16:02:44Z |
simonw/datasette |
Datasette 0.23: CSV, SpatiaLite and more |
2018-06-18 |
This release features CSV export, improved options for foreign key expansions, new configuration settings and improved support for SpatiaLite.
See full release notes here: http://datasette.readthedocs.io/en/latest/changelog.html#v0-23 |
2018-06-18T15:28:37Z |
simonw/datasette |
Datasette 0.22.1 |
2018-05-23 |
Bugfix release, plus we now use versioneer for our version numbers.
- Faceting no longer breaks pagination, fixes #282
- Add __version_info__ derived from __version__ [Robert Gieseke] - this might be a tuple of more than two values (major and minor version) if commits have been made after a release.
- Add version number support with Versioneer. [Robert Gieseke] Versioneer Licence: Public Domain (CC0-1.0). Closes #273
- Refactor inspect logic [Russ Garrett] |
2018-05-23T14:04:17Z |
simonw/datasette |
Datasette 0.22: Datasette Facets |
2018-05-20 |
The big new feature in this release is facets. Datasette can now apply faceted browse to any column in any table. It will also suggest possible facets. See the Datasette Facets announcement post for more details.
In addition to the work on facets:
Removed the --page_size= argument to datasette serve in favour of:
datasette serve --config default_page_size:50 mydb.db
Added new help section:
$ datasette --help-config
Config options:
default_page_size Default page size for the table view
(default=100)
max_returned_rows Maximum rows that can be returned from a table
or custom query (default=1000)
sql_time_limit_ms Time limit for a SQL query in milliseconds
(default=1000)
default_facet_size Number of values to return for requested facets
(default=30)
facet_time_limit_ms Time limit for calculating a requested facet
(default=200)
facet_suggest_time_limit_ms Time limit for calculating a suggested facet
(default=50)
- Only apply responsive table styles to .rows-and-column - otherwise they interfere with tables in the description, e.g. on https://fivethirtyeight.datasettes.com/fivethirtyeight/nba-elo%2Fnbaallelo
|
2018-05-20T23:44:19Z |
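Faceted browse as described above reduces to a grouped count per distinct column value, capped by default_facet_size. A minimal sketch of that core query with stdlib sqlite3 (table and column names are illustrative, and this ignores the time limits Datasette applies):

```python
import sqlite3

def facet(conn, table, column, facet_size=30):
    # Count rows per distinct value - the core query behind faceted browse.
    sql = (
        f"SELECT [{column}] AS value, count(*) AS n FROM [{table}] "
        f"GROUP BY [{column}] ORDER BY n DESC, value LIMIT ?"
    )
    return conn.execute(sql, (facet_size,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nba (team TEXT)")
conn.executemany("INSERT INTO nba VALUES (?)", [("MIL",), ("CHI",), ("MIL",)])
print(facet(conn, "nba", "team"))  # → [('MIL', 2), ('CHI', 1)]
```

Suggested facets work similarly: run this query per column and keep the columns whose distinct-value counts fall in a useful range.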
simonw/datasette |
Datasette 0.21: New _shape=, new _size=, search within columns |
2018-05-05 |
New JSON _shape= options, the ability to set table _size= and a mechanism for searching within specific columns.
- Default tests to using a longer time limit. Every now and then a test will fail in Travis CI on Python 3.5 because it hit the default 20ms SQL time limit. Test fixtures now default to a 200ms time limit, and we only use the 20ms time limit for the specific test that tests query interruption. This should make our tests on Python 3.5 in Travis much more stable.
- Support _search_COLUMN=text searches, closes #237
- Show version on /-/plugins page, closes #248
- ?_size=max option, closes #249
- Added /-/versions and /-/versions.json, closes #244
  Sample output:
  {
    "python": {
      "version": "3.6.3",
      "full": "3.6.3 (default, Oct 4 2017, 06:09:38) \n[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)]"
    },
    "datasette": {
      "version": "0.20"
    },
    "sqlite": {
      "version": "3.23.1",
      "extensions": {
        "json1": null,
        "spatialite": "4.3.0a"
      }
    }
  }
- Renamed ?_sql_time_limit_ms= to ?_timelimit, closes #242
- New ?_shape=array option + tweaks to _shape, closes #245
  - Default is now ?_shape=arrays (renamed from lists)
  - New ?_shape=array returns an array of objects as the root object
  - Changed ?_shape=object to return the object as the root
  - Updated docs
- FTS tables now detected by inspect(), closes #240
- New ?_size=XXX querystring parameter for table view, closes #229. Also added documentation for all of the _special arguments, plus deleted some duplicate logic implementing _group_count.
- If max_returned_rows==page_size, increment max_returned_rows - fixes #230
- New hidden: True option for table metadata, closes #239
- Hide idx_* tables if spatialite detected, closes #228
- Added class=rows-and-columns to custom query results table
- Added CSS class rows-and-columns to main table
- label_column option in metadata.json - closes #234
|
2018-05-05T23:21:33Z |
simonw/csvs-to-sqlite |
csvs-to-sqlite 0.8 |
2018-04-24 |
- -d and -df options for specifying date/datetime columns, closes #33
- Maintain lookup tables in SQLite, refs #17
- --index option to specify which columns to index, closes #24
- Test confirming --shape and --filename-column and -c work together #25
- Use usecols when loading CSV if shape specified
- --filename-column is now compatible with --shape, closes #10
- --no-index-fks option
By default, csvs-to-sqlite creates an index for every foreign key column that is
added using the --extract-column option.
For large tables, this can dramatically increase the size of the resulting
database file on disk. The new --no-index-fks option allows you to disable
this feature to save on file size.
Refs #24 which will allow you to explicitly list which columns SHOULD have
an index created.
- Added --filename-column option, refs #10
- Fixes for Python 2, refs #25
- Implemented new --shape option - refs #25
- --table option for specifying table to write to, refs #10
- Updated README to cover --skip-errors, refs #20
- Add --skip-errors option (#20) [Jani Monoses]
- Less verbosity (#19) [Jani Monoses]
Only log extract_columns info when that option is passed.
- Add option for field quoting behaviour (#15) [Jani Monoses] |
2018-04-24T15:35:30Z |
simonw/datasette |
Datasette 0.20: static assets and templates for plugins |
2018-04-20 |
Mostly new work on the Plugins mechanism: plugins can now bundle static assets and custom templates, and datasette publish has a new --install=name-of-plugin option.
- Add col-X classes to HTML table on custom query page
- Fixed outdated template in documentation
- Plugins can now bundle custom templates, #224
- Added /-/metadata /-/plugins /-/inspect, #225
- Documentation for --install option, refs #223
- Datasette publish/package --install option, #223
- Fix for plugins in Python 3.5, #222
- New plugin hooks: extra_css_urls() and extra_js_urls(), #214
- /-/static-plugins/PLUGIN_NAME/ now serves static/ from plugins
- <th> now gets class="col-X" - plus added col-X documentation
- Use to_css_class for table cell column classes
This ensures that columns with spaces in the name will still
generate usable CSS class names. Refs #209
- Add column name classes to <td>s, make PK bold [Russ Garrett]
- Don't duplicate simple primary keys in the link column [Russ Garrett]
When there's a simple (single-column) primary key, it looks weird to
duplicate it in the link column.
This change removes the second PK column and treats the link column as
if it were the PK column from a header/sorting perspective.
- Correct escaping for HTML display of row links [Russ Garrett]
- Longer time limit for test_paginate_compound_keys
It was failing intermittently in Travis - see #209
- Use application/octet-stream for downloadable databases
- Updated PyPI classifiers
- Updated PyPI link to pypi.org |
2018-04-20T14:41:14Z |
simonw/datasette |
Datasette 0.19: plugins preview |
2018-04-17 |
This is the first preview of the new Datasette plugins mechanism. Only two plugin hooks are available so far - for custom SQL functions and custom template filters. There's plenty more to come - read the documentation and get involved in the tracking ticket if you have feedback on the direction so far.
- Fix for _sort_desc=sortable_with_nulls test, refs #216
- Fixed #216 - paginate correctly when sorting by nullable column
- Initial documentation for plugins, closes #213
https://datasette.readthedocs.io/en/latest/plugins.html
- New --plugins-dir=plugins/ option (#212)
New option causing Datasette to load and evaluate all of the Python files in the specified directory and register any plugins that are defined in those files.
This new option is available for the following commands:
datasette serve mydb.db --plugins-dir=plugins/
datasette publish now/heroku mydb.db --plugins-dir=plugins/
datasette package mydb.db --plugins-dir=plugins/
- Start of the plugin system, based on pluggy (#210)
Uses https://pluggy.readthedocs.io/ originally created for the py.test project
We're starting with two plugin hooks:
prepare_connection(conn)
This is called when a new SQLite connection is created. It can be used to register custom SQL functions.
prepare_jinja2_environment(env)
This is called with the Jinja2 environment. It can be used to register custom template tags and filters.
An example plugin which uses these two hooks can be found at https://github.com/simonw/datasette-plugin-demos or installed using pip install datasette-plugin-demos
Refs #14
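To make the prepare_connection(conn) hook concrete, here is a minimal sketch of what a plugin's hook body could do, using the stdlib sqlite3 module to simulate Datasette opening a connection. The reverse_string function is a made-up example, and the Datasette hookimpl decorator a real plugin would need is omitted:

```python
import sqlite3

# Sketch of a prepare_connection(conn) hook body: register a custom
# SQL function on every new connection. In a real plugin this function
# would be decorated with Datasette's hookimpl decorator.
def prepare_connection(conn):
    conn.create_function("reverse_string", 1, lambda s: s[::-1])

# Simulate Datasette calling the hook on a fresh SQLite connection:
conn = sqlite3.connect(":memory:")
prepare_connection(conn)
print(conn.execute("select reverse_string('hello')").fetchone()[0])  # olleh
```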
- Return HTTP 405 on InvalidUsage rather than 500. [Russ Garrett]
This also stops it filling up the logs. This happens for HEAD requests at the moment - which perhaps should be handled better, but that's a different issue. |
2018-04-17T02:21:51Z |
simonw/datasette |
Datasette 0.18: units |
2018-04-14 |
This release introduces support for units, contributed by Russ Garrett (#203). You can now optionally specify the units for specific columns using metadata.json . Once specified, units will be displayed in the HTML view of your table. They also become available for use in filters - if a column is configured with a unit of distance, you can request all rows where that column is less than 50 meters or more than 20 feet for example.
- Link foreign keys which don't have labels. [Russ Garrett]
This renders unlabeled FKs as simple links.
Also includes bonus fixes for two minor issues:
- In foreign key link hrefs the primary key was escaped using HTML escaping rather than URL escaping. This broke some non-integer PKs.
- Print tracebacks to console when handling 500 errors.
- Fix SQLite error when loading rows with no incoming FKs. [Russ Garrett]
This fixes ERROR: conn=<sqlite3.Connection object at 0x10bbb9f10>, sql = 'select ', params = {'id': '1'} caused by an invalid query when loading incoming FKs.
The error was ignored due to async but it still got printed to the console.
- Allow custom units to be registered with Pint. [Russ Garrett]
- Support units in filters. [Russ Garrett]
- Tidy up units support. [Russ Garrett]
- Add units to exported JSON
- Units key in metadata skeleton
- Docs
- Initial units support. [Russ Garrett]
Add support for specifying units for a column in metadata.json and rendering them on display using pint
|
2018-04-14T15:45:11Z |
simonw/datasette |
Datasette 0.16: sort on mobile, better error handling |
2018-04-13 |
- Better mechanism for handling errors; 404s for missing table/database
New error mechanism closes #193
404s for missing tables/databases closes #184
- long_description in markdown for the new PyPI
- Hide Spatialite system tables. [Russ Garrett]
- Allow explain select / explain query plan select #201
- Datasette inspect now finds primary_keys #195
- Ability to sort using form fields (for mobile portrait mode) #199
We now display sort options as a select box plus a descending checkbox, which means you can apply sort orders even in portrait mode on a mobile phone where the column headers are hidden.
|
2018-04-13T21:10:53Z |
simonw/datasette |
Datasette 0.15: sort by column |
2018-04-09 |
The biggest new feature in this release is the ability to sort by column. On the table page the column headers can now be clicked to apply sort (or descending sort), or you can specify ?_sort=column or ?_sort_desc=column directly in the URL.
You can try this feature out on this fivethirtyeight data about the ages of different US politicians.
- table_rows => table_rows_count, filtered_table_rows => filtered_table_rows_count
Renamed properties. Closes #194
- New sortable_columns option in metadata.json to control sort options.
You can now explicitly set which columns in a table can be used for sorting using the _sort and _sort_desc arguments using metadata.json :
{
"databases": {
"database1": {
"tables": {
"example_table": {
"sortable_columns": [
"height",
"weight"
]
}
}
}
}
}
Refs #189
- Column headers now link to sort/desc sort - refs #189
- _sort and _sort_desc parameters for table views
Allows for paginated sorted results based on a specified column.
Refs #189
- Total row count now correct even if _next applied
- Use .custom_sql() for _group_count implementation (refs #150)
- Make HTML title more readable in query template (#180) [Ryan Pitts]
- New ?_shape=objects/object/lists param for JSON API (#192)
New _shape= parameter replacing old .jsono extension
Now instead of this:
/database/table.jsono
We use the _shape parameter like this:
/database/table.json?_shape=objects
Also introduced a new _shape called object which looks like this:
/database/table.json?_shape=object
Returning an object for the rows key:
...
"rows": {
"pk1": {
...
},
"pk2": {
...
}
}
Refs #122
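The transform the object shape applies can be sketched in a few lines; the helper name rows_to_object_shape is hypothetical, used only to illustrate how the default list-of-rows output becomes a dict keyed by primary key:

```python
# Sketch of the ?_shape=object transform: the default "objects" shape
# is a list of row dictionaries; "object" keys each row by its primary
# key instead. (Helper name is a made-up illustration.)
def rows_to_object_shape(rows, pk_column):
    return {str(row[pk_column]): row for row in rows}

rows = [{"pk": 1, "name": "a"}, {"pk": 2, "name": "b"}]
print(rows_to_object_shape(rows, "pk"))
# {'1': {'pk': 1, 'name': 'a'}, '2': {'pk': 2, 'name': 'b'}}
```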
- Utility for writing test database fixtures to a .db file
python tests/fixtures.py /tmp/hello.db
This is useful for making a SQLite database of the test fixtures for interactive exploration.
- Compound primary key _next= now plays well with extra filters
Closes #190
- Fixed bug with keyset pagination over compound primary keys
Refs #190
- Database/Table views inherit source/license/source_url/license_url metadata
If you set the source_url/license_url/source/license fields in your root metadata those values will now be inherited all the way down to the database and table templates.
The title/description are NOT inherited.
Also added unit tests for the HTML generated by the metadata.
Refs #185
- Add metadata, if it exists, to heroku temp dir (#178) [Tony Hirst]
- Initial documentation for pagination
- Broke up test_app into test_api and test_html
- Fixed bug with .json path regular expression
I had a table called geojson and it caused an exception because the regex was matching .json and not \.json
- Deploy to Heroku with Python 3.6.3
|
2018-04-09T15:55:29Z |
simonw/datasette |
Datasette 0.14: customization edition |
2017-12-10 |
The theme of this release is customization: Datasette now allows every aspect of its presentation to be customized either using additional CSS or by providing entirely new templates.
Datasette's metadata.json format has also been expanded, to allow per-database and per-table metadata. A new datasette skeleton command can be used to generate a skeleton JSON file ready to be filled in with per-database and per-table details.
The metadata.json file can also be used to define canned queries, as a more powerful alternative to SQL views.
- extra_css_urls / extra_js_urls in metadata
A mechanism in the metadata.json format for adding custom CSS and JS urls.
Create a metadata.json file that looks like this:
{
"extra_css_urls": [
"https://simonwillison.net/static/css/all.bf8cd891642c.css"
],
"extra_js_urls": [
"https://code.jquery.com/jquery-3.2.1.slim.min.js"
]
}
Then start datasette like this:
datasette mydb.db --metadata=metadata.json
The CSS and JavaScript files will be linked in the <head> of every page.
You can also specify a SRI (subresource integrity hash) for these assets:
{
"extra_css_urls": [
{
"url": "https://simonwillison.net/static/css/all.bf8cd891642c.css",
"sri": "sha384-9qIZekWUyjCyDIf2YK1FRoKiPJq4PHt6tp/ulnuuyRBvazd0hG7pWbE99zvwSznI"
}
],
"extra_js_urls": [
{
"url": "https://code.jquery.com/jquery-3.2.1.slim.min.js",
"sri": "sha256-k2WSCIexGzOj3Euiig+TlR8gA0EmPjuc79OEeY5L45g="
}
]
}
Modern browsers will only execute the stylesheet or JavaScript if the SRI hash matches the content served. You can generate hashes using https://www.srihash.org/
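If you prefer to compute the hash yourself rather than use srihash.org, the SRI format is simply the algorithm name joined to a base64-encoded digest of the file bytes; a small sketch (the sri_hash helper is a made-up name):

```python
import base64
import hashlib

# An SRI value is "<algorithm>-<base64 digest of the file contents>".
def sri_hash(content: bytes, algorithm: str = "sha384") -> str:
    digest = hashlib.new(algorithm, content).digest()
    return algorithm + "-" + base64.b64encode(digest).decode("ascii")

print(sri_hash(b"hello", "sha256"))
# sha256-LPJNul+wow4m6DsqxbninhsWHlwfp0JecwQzYpOLmCQ=
```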
- Auto-link column values that look like URLs (#153)
- CSS styling hooks as classes on the body (#153)
Every template now gets CSS classes in the body designed to support custom styling.
The index template (the top level page at / ) gets this:
<body class="index">
The database template (/dbname/ ) gets this:
<body class="db db-dbname">
The table template (/dbname/tablename ) gets:
<body class="table db-dbname table-tablename">
The row template (/dbname/tablename/rowid ) gets:
<body class="row db-dbname table-tablename">
The db-x and table-x classes use the database or table names themselves IF they are valid CSS identifiers. If they aren't, we strip any invalid characters out and append a 6 character md5 digest of the original name, in order to ensure that multiple tables which resolve to the same stripped character version still have different CSS classes.
Some examples (extracted from the unit tests):
"simple" => "simple"
"MixedCase" => "MixedCase"
"-no-leading-hyphens" => "no-leading-hyphens-65bea6"
"_no-leading-underscores" => "no-leading-underscores-b921bc"
"no spaces" => "no-spaces-7088d7"
"-" => "336d5e"
"no $ characters" => "no--characters-59e024"
- datasette --template-dir=mytemplates/ argument
You can now pass an additional argument specifying a directory to look for custom templates in.
Datasette will fall back on the default templates if a template is not found in that directory.
- Ability to over-ride templates for individual tables/databases.
It is now possible to over-ride templates on a per-database / per-row or per-table basis.
When you access e.g. /mydatabase/mytable Datasette will look for the following:
- table-mydatabase-mytable.html
- table.html
If you provided a --template-dir argument to datasette serve it will look in that directory first.
The lookup rules are as follows:
Index page (/):
index.html
Database page (/mydatabase):
database-mydatabase.html
database.html
Table page (/mydatabase/mytable):
table-mydatabase-mytable.html
table.html
Row page (/mydatabase/mytable/id):
row-mydatabase-mytable.html
row.html
If a table name has spaces or other unexpected characters in it, the template filename will follow the same rules as our custom <body> CSS classes - for example, a table called "Food Trucks" will attempt to load the following templates:
table-mydatabase-Food-Trucks-399138.html
table.html
It is possible to extend the default templates using Jinja template inheritance. If you want to customize EVERY row template with some additional content you can do so by creating a row.html template like this:
{% extends "default:row.html" %}
{% block content %}
<h1>EXTRA HTML AT THE TOP OF THE CONTENT BLOCK</h1>
<p>This line renders the original block:</p>
{{ super() }}
{% endblock %}
- --static option for datasette serve (#160)
You can now tell Datasette to serve static files from a specific location at a specific mountpoint.
For example:
datasette serve mydb.db --static extra-css:/tmp/static/css
Now if you visit this URL:
http://localhost:8001/extra-css/blah.css
The following file will be served:
/tmp/static/css/blah.css
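The mountpoint-to-path mapping works roughly like this sketch (the resolve_static helper and dict format are assumptions for illustration, not Datasette internals):

```python
# Map a "mountpoint: directory" configuration, as set up by
# --static extra-css:/tmp/static/css, onto an incoming URL path.
def resolve_static(mounts, url_path):
    for mountpoint, root in mounts.items():
        prefix = "/" + mountpoint + "/"
        if url_path.startswith(prefix):
            return root.rstrip("/") + "/" + url_path[len(prefix):]
    return None  # not a static asset URL

mounts = {"extra-css": "/tmp/static/css"}
print(resolve_static(mounts, "/extra-css/blah.css"))  # /tmp/static/css/blah.css
```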
- Canned query support.
Named canned queries can now be defined in metadata.json like this:
{
"databases": {
"timezones": {
"queries": {
"timezone_for_point": "select tzid from timezones ..."
}
}
}
}
These will be shown in a new "Queries" section beneath "Views" on the database page.
- New datasette skeleton command for generating metadata.json (#164)
- metadata.json support for per-table/per-database metadata (#165)
Also added support for descriptions and HTML descriptions.
Here's an example metadata.json file illustrating custom per-database and per-table metadata:
{
"title": "Overall datasette title",
"description_html": "This is a <em>description with HTML</em>.",
"databases": {
"db1": {
"title": "First database",
"description": "This is a string description & has no HTML",
"license_url": "http://example.com/",
"license": "The example license",
"queries": {
"canned_query": "select * from table1 limit 3;"
},
"tables": {
"table1": {
"title": "Custom title for table1",
"description": "Tables can have descriptions too",
"source": "This has a custom source",
"source_url": "http://example.com/"
}
}
}
}
}
- Renamed datasette build command to datasette inspect (#130)
- Upgrade to Sanic 0.7.0 (#168)
https://github.com/channelcat/sanic/releases/tag/0.7.0
- Package and publish commands now accept --static and --template-dir
Example usage:
datasette package --static css:extra-css/ --static js:extra-js/ \
sf-trees.db --template-dir templates/ --tag sf-trees --branch master
This creates a local Docker image that includes copies of the templates/, extra-css/ and extra-js/ directories. You can then run it like this:
docker run -p 8001:8001 sf-trees
For publishing to Zeit now:
datasette publish now --static css:extra-css/ --static js:extra-js/ \
sf-trees.db --template-dir templates/ --name sf-trees --branch master
- HTML comment showing which templates were considered for a page (#171)
|
2017-12-10T01:41:14Z |
simonw/csvs-to-sqlite |
csvs-to-sqlite 0.7 |
2017-11-26 |
- Add -s option to specify input field separator (#13) [Jani Monoses]
|
2017-11-26T03:14:11Z |
simonw/datasette |
Datasette 0.13: foreign key, search and filters |
2017-11-25 |
0.13 (2017-11-24)
- Search now applies to current filters.
Combined search into the same form as filters.
Closes #133
- Much tidier design for table view header.
Closes #147
- Added ?column__not=blah filter.
Closes #148
- Row page now resolves foreign keys.
Closes #132
- Further tweaks to select/input filter styling.
Refs #86 - thanks for the help, @natbat!
- Show linked foreign key in table cells.
- Added UI for editing table filters.
Refs #86
- Hide FTS-created tables on index pages.
Closes #129
- Add publish to heroku support [Jacob Kaplan-Moss]
datasette publish heroku mydb.db
Pull request #104
- Initial implementation of ?_group_count=column.
URL shortcut for counting rows grouped by one or more columns.
?_group_count=column1&_group_count=column2 works as well.
SQL generated looks like this:
select "qSpecies", count(*) as "count"
from Street_Tree_List
group by "qSpecies"
order by "count" desc limit 100
Or for two columns like this:
select "qSpecies", "qSiteInfo", count(*) as "count"
from Street_Tree_List
group by "qSpecies", "qSiteInfo"
order by "count" desc limit 100
Refs #44
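Generating that SQL from the requested columns is straightforward; a sketch (the group_count_sql helper name is hypothetical, and real identifier escaping would need more care than simple quoting):

```python
# Build the _group_count SQL shown above from a list of columns.
def group_count_sql(table, columns, limit=100):
    cols = ", ".join('"{}"'.format(c) for c in columns)
    return (
        'select {cols}, count(*) as "count" from {table} '
        'group by {cols} order by "count" desc limit {limit}'
    ).format(cols=cols, table=table, limit=limit)

print(group_count_sql("Street_Tree_List", ["qSpecies"]))
```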
- Added --build=master option to datasette publish and package.
The datasette publish and datasette package commands both now
accept an optional --build argument. If provided, this can be used
to specify a branch published to GitHub that should be built into
the container.
This makes it easier to test code that has not yet been officially
released to PyPI, e.g.:
datasette publish now mydb.db --branch=master
- Implemented ?_search=XXX + UI if a FTS table is detected.
Closes #131
- Added datasette --version support.
- Table views now show expanded foreign key references, if possible.
If a table has foreign key columns, and those foreign key tables
have label_columns , the TableView will now query those other
tables for the corresponding values and display those values as
links in the corresponding table cells.
label_columns are currently detected by the inspect() function,
which looks for any table that has just two columns - an ID column
and one other - and sets the label_column to be that second non-ID
column.
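The two-column heuristic can be sketched against a live connection with the stdlib sqlite3 module. Treating a column literally named "id" as the ID column is an assumption of this sketch, not necessarily what inspect() does:

```python
import sqlite3

# Sketch of label_column detection: a table with exactly two columns,
# an ID column plus one other, gets that other column as its label.
def detect_label_column(conn, table):
    columns = [row[1] for row in conn.execute('PRAGMA table_info("{}")'.format(table))]
    if len(columns) == 2 and "id" in [c.lower() for c in columns]:
        return [c for c in columns if c.lower() != "id"][0]
    return None

conn = sqlite3.connect(":memory:")
conn.execute("create table species (id integer primary key, name text)")
print(detect_label_column(conn, "species"))  # name
```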
- Don't prevent tabbing to "Run SQL" button (#117) [Robert Gieseke]
See comment in #115
- Add keyboard shortcut to execute SQL query (#115) [Robert Gieseke]
- Allow --load-extension to be set via environment variable.
- Add support for ?field__isnull=1 (#107) [Ray N]
- Add spatialite, switch to debian and local build (#114) [Ariel Núñez]
- Added --load-extension argument to datasette serve.
Allows loading of SQLite extensions. Refs #110.
|
2017-11-25T03:44:46Z |
simonw/csvs-to-sqlite |
csvs-to-sqlite 0.6.1 |
2017-11-25 |
-f and -c now work for a single table with multiple columns.
Fixes #12 |
2017-11-25T02:58:25Z |
simonw/csvs-to-sqlite |
csvs-to-sqlite 0.6 |
2017-11-24 |
SQLite full-text search support
- Added --fts option for setting up SQLite full-text search.
The --fts option will create a corresponding SQLite FTS virtual table, using
the best available version of the FTS module.
https://sqlite.org/fts5.html
https://www.sqlite.org/fts3.html
Usage:
csvs-to-sqlite my-csv.csv output.db -f column1 -f column2
Example generated with this option: https://sf-trees-search.now.sh/
Example search: https://sf-trees-search.now.sh/sf-trees-search-a899b92?sql=select+*+from+Street_Tree_List+where+rowid+in+%28select+rowid+from+Street_Tree_List_fts+where+Street_Tree_List_fts+match+%27grove+london+dpw%27%29%0D%0A
Will be used in https://github.com/simonw/datasette/issues/131
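The kind of virtual table and query this sets up can be reproduced directly with the stdlib sqlite3 module. This is a simplified sketch using FTS4 without the external-content optimisation the real tool may use:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table Street_Tree_List (qSpecies text, qAddress text)")
conn.execute("insert into Street_Tree_List values ('London Plane', '100 Grove St')")
conn.execute("insert into Street_Tree_List values ('Coast Redwood', '5 Market St')")
# Roughly what --fts creates: a virtual table indexing the chosen columns
conn.execute("create virtual table Street_Tree_List_fts using fts4 (qSpecies, qAddress)")
conn.execute("insert into Street_Tree_List_fts select qSpecies, qAddress from Street_Tree_List")
# The same rowid-in-subselect query shape as the example search URL above:
matches = conn.execute(
    "select rowid from Street_Tree_List where rowid in "
    "(select rowid from Street_Tree_List_fts where Street_Tree_List_fts match 'grove london')"
).fetchall()
print(matches)  # [(1,)]
```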
- --fts and --extract-column now cooperate.
If you extract a column and then specify that same column in the --fts list,
csvs-to-sqlite now uses the original value of that column in the index.
Example using CSV from https://data.sfgov.org/City-Infrastructure/Street-Tree-List/tkzw-k3nq
csvs-to-sqlite Street_Tree_List.csv trees-fts.db \
-c qLegalStatus -c qSpecies -c qSiteInfo \
-c PlantType -c qCaretaker -c qCareAssistant \
-f qLegalStatus -f qSpecies -f qAddress \
-f qSiteInfo -f PlantType -f qCaretaker \
-f qCareAssistant -f PermitNotes
Closes #9
- Handle column names with spaces in them.
- Added csvs-to-sqlite --version option.
Using http://click.pocoo.org/5/api/#click.version_option |
2017-11-24T23:16:45Z |
simonw/csvs-to-sqlite |
csvs-to-sqlite 0.5 |
2017-11-19 |
Now handles columns with integers and nulls in them correctly
Pandas does a good job of figuring out which SQLite column types should be
used for a DataFrame - with one exception: due to a limitation of NumPy it
treats columns containing a mixture of integers and NaN (blank values) as
being of type float64, which means they end up as REAL columns in SQLite.
http://pandas.pydata.org/pandas-docs/stable/gotchas.html#support-for-integer-na
To fix this, we now check to see if a float64 column actually consists solely
of NaN and integer-valued floats (checked using v.is_integer() in Python). If
that is the case, we over-ride the column type to be INTEGER instead.
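Stripped of the pandas machinery, the core check looks something like this sketch (the function name is a made-up illustration of the described logic):

```python
import math

# A float64 column that contains only NaN plus integer-valued floats
# (checked with float.is_integer()) can safely become INTEGER.
def should_be_integer_column(values):
    saw_number = False
    for v in values:
        if v is None or math.isnan(v):
            continue  # blank values don't disqualify the column
        if not float(v).is_integer():
            return False
        saw_number = True
    return saw_number

print(should_be_integer_column([1.0, 2.0, float("nan")]))  # True
print(should_be_integer_column([1.5, 2.0]))                # False
```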
See #5 - also a8ab524 and 0997b7b |
2017-11-19T05:53:25Z |
simonw/csvs-to-sqlite |
csvs-to-sqlite 0.3 |
2017-11-17 |
- Mechanism for converting columns into separate tables
Let's say you have a CSV file that looks like this:
county,precinct,office,district,party,candidate,votes
Clark,1,President,,REP,John R. Kasich,5
Clark,2,President,,REP,John R. Kasich,0
Clark,3,President,,REP,John R. Kasich,7
(Real example from https://github.com/openelections/openelections-data-sd/blob/master/2016/20160607__sd__primary__clark__precinct.csv )
You can now convert selected columns into separate lookup tables using the new
--extract-column option (shortname: -c) - for example:
csvs-to-sqlite openelections-data-*/*.csv \
-c county:County:name \
-c precinct:Precinct:name \
-c office -c district -c party -c candidate \
openelections.db
The format is as follows:
column_name:optional_table_name:optional_table_value_column_name
If you just specify the column name e.g. -c party , the following table will
be created:
CREATE TABLE "party" (
"id" INTEGER PRIMARY KEY,
"value" TEXT
);
If you specify all three options, e.g. -c precinct:Precinct:name the table
will look like this:
CREATE TABLE "Precinct" (
"id" INTEGER PRIMARY KEY,
"name" TEXT
);
The original tables will be created like this:
CREATE TABLE "ca__primary__san_francisco__precinct" (
"county" INTEGER,
"precinct" INTEGER,
"office" INTEGER,
"district" INTEGER,
"party" INTEGER,
"candidate" INTEGER,
"votes" INTEGER,
FOREIGN KEY (county) REFERENCES County(id),
FOREIGN KEY (party) REFERENCES party(id),
FOREIGN KEY (precinct) REFERENCES Precinct(id),
FOREIGN KEY (office) REFERENCES office(id),
FOREIGN KEY (candidate) REFERENCES candidate(id)
);
They will be populated with IDs that reference the new derived tables.
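The value-to-ID substitution at the heart of --extract-column can be sketched in plain Python (the extract_column helper is a made-up illustration, not the tool's pandas-based implementation):

```python
# Build a lookup table of distinct values for one column and replace
# each row's value with its integer ID, as --extract-column does.
def extract_column(rows, column):
    lookup = {}
    extracted = []
    for row in rows:
        value = row[column]
        if value not in lookup:
            lookup[value] = len(lookup) + 1  # 1-based INTEGER PRIMARY KEY
        extracted.append({**row, column: lookup[value]})
    return extracted, lookup

rows = [{"county": "Clark", "votes": 5}, {"county": "Clark", "votes": 0}]
extracted, lookup = extract_column(rows, "county")
print(lookup)     # {'Clark': 1}
print(extracted)  # [{'county': 1, 'votes': 5}, {'county': 1, 'votes': 0}]
```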
Closes #2 |
2017-11-17T05:33:39Z |
simonw/datasette |
Datasette 0.12 |
2017-11-16 |
- Added __version__, now displayed as tooltip in page footer (#108).
- Added initial docs, including a changelog (#99).
- Turned on auto-escaping in Jinja.
- Added a UI for editing named parameters (#96).
You can now construct a custom SQL statement using SQLite named parameters (e.g. :name ) and datasette will display form fields for editing those parameters. Here's an example which lets you see the most popular names for dogs of different species registered through various dog registration schemes in Australia.
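The :name syntax Datasette detects is standard SQLite named-parameter syntax, so the same query shape can be tried with the stdlib sqlite3 module (table and values here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table dogs (name text, species text)")
conn.execute("insert into dogs values ('Cleo', 'Labrador')")
# :species is a SQLite named parameter, bound from a dictionary:
row = conn.execute(
    "select name from dogs where species = :species", {"species": "Labrador"}
).fetchone()
print(row)  # ('Cleo',)
```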
- Pin to specific Jinja version. (#100).
- Default to 127.0.0.1 not 0.0.0.0. (#98).
- Added extra metadata options to publish and package commands. (#92).
You can now run these commands like so:
datasette publish now mydb.db \
--title="My Title" \
--source="Source" \
--source_url="http://www.example.com/" \
--license="CC0" \
--license_url="https://creativecommons.org/publicdomain/zero/1.0/"
This will write those values into the metadata.json that is packaged
with the app. If you also pass --metadata=metadata.json that file
will be updated with the extra values before being written into the
Docker image.
- Added simple production-ready Dockerfile (#94) [Andrew Cutler]
- New ?_sql_time_limit_ms=10 argument to database and table page (#95)
- SQL syntax highlighting with Codemirror (#89) [Tom Dyson]
|
2017-11-16T16:01:35Z |