{"id": "settings:setting-max-csv-mb", "page": "settings", "ref": "setting-max-csv-mb", "title": "max_csv_mb", "content": "The maximum size of CSV that can be exported, in megabytes. Defaults to 100MB.\n You can disable the limit entirely by setting this to 0: \n datasette mydatabase.db --setting max_csv_mb 0", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-max-insert-rows", "page": "settings", "ref": "setting-max-insert-rows", "title": "max_insert_rows", "content": "Maximum rows that can be inserted at a time using the bulk insert API, see Inserting rows . Defaults to 100. \n You can increase or decrease this limit like so: \n datasette mydatabase.db --setting max_insert_rows 1000", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-max-returned-rows", "page": "settings", "ref": "setting-max-returned-rows", "title": "max_returned_rows", "content": "Datasette returns a maximum of 1,000 rows of data at a time. If you execute a query that returns more than 1,000 rows, Datasette will return the first 1,000 and include a warning that the result set has been truncated. You can use OFFSET/LIMIT or other methods in your SQL to implement pagination if you need to return more than 1,000 rows. \n You can increase or decrease this limit like so: \n datasette mydatabase.db --setting max_returned_rows 2000", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-max-signed-tokens-ttl", "page": "settings", "ref": "setting-max-signed-tokens-ttl", "title": "max_signed_tokens_ttl", "content": "Maximum allowed expiry time for signed API tokens created by users. \n Defaults to 0 which means no limit - tokens can be created that will never expire. \n Set this to a value in seconds to limit the maximum expiry time. For example, to set that limit to 24 hours you would use: \n datasette mydatabase.db --setting max_signed_tokens_ttl 86400 \n This setting is enforced when incoming tokens are processed.", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-publish-secrets", "page": "settings", "ref": "setting-publish-secrets", "title": "Using secrets with datasette publish", "content": "The datasette publish and datasette package commands both generate a secret for you automatically when Datasette is deployed. \n This means that every time you deploy a new version of a Datasette project, a new secret will be generated. This will cause signed cookies to become invalid on every fresh deploy. \n You can fix this by creating a secret that will be used for multiple deploys and passing it using the --secret option: \n datasette publish cloudrun mydb.db --service=my-service --secret=cdb19e94283a20f9d42cca5", "breadcrumbs": "[\"Settings\"]", "references": "[]"} {"id": "settings:setting-secret", "page": "settings", "ref": "setting-secret", "title": "Configuring the secret", "content": "Datasette uses a secret string to sign secure values such as cookies. \n If you do not provide a secret, Datasette will create one when it starts up. This secret will reset every time the Datasette server restarts though, so things like authentication cookies and API tokens will not stay valid between restarts. \n You can pass a secret to Datasette in two ways: with the --secret command-line option or by setting a DATASETTE_SECRET environment variable. 
\n datasette mydb.db --secret=SECRET_VALUE_HERE \n Or: \n export DATASETTE_SECRET=SECRET_VALUE_HERE\ndatasette mydb.db \n One way to generate a secure random secret is to use Python like this: \n python3 -c 'import secrets; print(secrets.token_hex(32))'\ncdb19e94283a20f9d42cca50c5a4871c0aa07392db308755d60a1a5b9bb0fa52 \n Plugin authors make use of this signing mechanism in their plugins using .sign(value, namespace=\"default\") and .unsign(value, namespace=\"default\") .", "breadcrumbs": "[\"Settings\"]", "references": "[]"} {"id": "settings:setting-sql-time-limit-ms", "page": "settings", "ref": "setting-sql-time-limit-ms", "title": "sql_time_limit_ms", "content": "By default, queries have a time limit of one second. If a query takes longer than this to run Datasette will terminate the query and return an error. \n If this time limit is too short for you, you can customize it using the sql_time_limit_ms limit - for example, to increase it to 3.5 seconds: \n datasette mydatabase.db --setting sql_time_limit_ms 3500 \n You can optionally set a lower time limit for an individual query using the ?_timelimit=100 query string argument: \n /my-database/my-table?qSpecies=44&_timelimit=100 \n This would set the time limit to 100ms for that specific query. This feature is useful if you are working with databases of unknown size and complexity - a query that might make perfect sense for a smaller table could take too long to execute on a table with millions of rows. By setting custom time limits you can execute queries \"optimistically\" - e.g. give me an exact count of rows matching this query but only if it takes less than 100ms to calculate.", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-suggest-facets", "page": "settings", "ref": "setting-suggest-facets", "title": "suggest_facets", "content": "Should Datasette calculate suggested facets? On by default, turn this off like so: \n datasette mydatabase.db --setting suggest_facets off", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-truncate-cells-html", "page": "settings", "ref": "setting-truncate-cells-html", "title": "truncate_cells_html", "content": "In the HTML table view, truncate any strings that are longer than this value.\n The full value will still be available in CSV, JSON and on the individual row\n HTML page. Set this to 0 to disable truncation. \n datasette mydatabase.db --setting truncate_cells_html 0", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:using-setting", "page": "settings", "ref": "using-setting", "title": "Using --setting", "content": "Datasette supports a number of settings. These can be set using the --setting name value option to datasette serve . \n You can set multiple settings at once like this: \n datasette mydatabase.db \\\n --setting default_page_size 50 \\\n --setting sql_time_limit_ms 3500 \\\n --setting max_returned_rows 2000 \n Settings can also be specified in the database.yaml configuration file .", "breadcrumbs": "[\"Settings\"]", "references": "[]"} {"id": "spatialite:installing-spatialite-on-linux", "page": "spatialite", "ref": "installing-spatialite-on-linux", "title": "Installing SpatiaLite on Linux", "content": "SpatiaLite is packaged for most Linux distributions. 
\n apt install spatialite-bin libsqlite3-mod-spatialite \n Depending on your distribution, you should be able to run Datasette something like this: \n datasette --load-extension=/usr/lib/x86_64-linux-gnu/mod_spatialite.so \n If you are unsure of the location of the module, try running locate mod_spatialite and see what comes back.", "breadcrumbs": "[\"SpatiaLite\", \"Installation\"]", "references": "[]"} {"id": "spatialite:querying-polygons-using-within", "page": "spatialite", "ref": "querying-polygons-using-within", "title": "Querying polygons using within()", "content": "The within() SQL function can be used to check if a point is within a geometry: \n select\n name\nfrom\n places\nwhere\n within(GeomFromText('POINT(-3.1724366 51.4704448)'), places.geom); \n The GeomFromText() function takes a string of well-known text. Note that the order used here is longitude then latitude . \n To run that same within() query in a way that benefits from the spatial index, use the following: \n select\n name\nfrom\n places\nwhere\n within(GeomFromText('POINT(-3.1724366 51.4704448)'), places.geom)\n and rowid in (\n SELECT pkid FROM idx_places_geom\n where xmin < -3.1724366\n and xmax > -3.1724366\n and ymin < 51.4704448\n and ymax > 51.4704448\n );", "breadcrumbs": "[\"SpatiaLite\"]", "references": "[]"} {"id": "spatialite:spatial-indexing-latitude-longitude-columns", "page": "spatialite", "ref": "spatial-indexing-latitude-longitude-columns", "title": "Spatial indexing latitude/longitude columns", "content": "Here's a recipe for taking a table with existing latitude and longitude columns, adding a SpatiaLite POINT geometry column to that table, populating the new column and then populating a spatial index: \n import sqlite3\n\nconn = sqlite3.connect(\"museums.db\")\n# Load the spatialite extension:\nconn.enable_load_extension(True)\nconn.load_extension(\"/usr/local/lib/mod_spatialite.dylib\")\n# Initialize spatial metadata for this database:\nconn.execute(\"select InitSpatialMetadata(1)\")\n# Add a geometry column called point_geom to our museums table:\nconn.execute(\n \"SELECT AddGeometryColumn('museums', 'point_geom', 4326, 'POINT', 2);\"\n)\n# Now update that geometry column with the lat/lon points\nconn.execute(\n \"\"\"\n UPDATE museums SET\n point_geom = GeomFromText('POINT('||\"longitude\"||' '||\"latitude\"||')',4326);\n\"\"\"\n)\n# Now add a spatial index to that column\nconn.execute(\n 'select CreateSpatialIndex(\"museums\", \"point_geom\");'\n)\n# If you don't commit, your changes will not be persisted:\nconn.commit()\nconn.close()", "breadcrumbs": "[\"SpatiaLite\"]", "references": "[]"} {"id": "spatialite:spatialite-installation", "page": "spatialite", "ref": "spatialite-installation", "title": "Installation", "content": "", "breadcrumbs": "[\"SpatiaLite\"]", "references": "[]"} {"id": "sql_queries:canned-queries-json-api", "page": "sql_queries", "ref": "canned-queries-json-api", "title": "JSON API for writable canned queries", "content": "Writable canned queries can also be accessed using a JSON API. You can POST data to them using JSON, and you can request that their response is returned to you as JSON. 
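\n For example, here is a minimal sketch using the Python requests library - it assumes a Datasette instance running at localhost:8001 with the add_message query described below, so adjust the URL and query name for your own deployment: \n import requests\n\n# POST a JSON body to the writable canned query and ask for a JSON response back\nresponse = requests.post(\n    \"http://localhost:8001/mydatabase/add_message\",\n    json={\"message\": \"Message goes here\"},\n    headers={\"Accept\": \"application/json\"},\n)\n# A successful write returns a payload like {\"ok\": True, \"message\": \"...\", \"redirect\": \"...\"}\nprint(response.json())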
\n To submit JSON to a writable canned query, encode key/value parameters as a JSON document: \n POST /mydatabase/add_message\n\n{\"message\": \"Message goes here\"} \n You can also continue to submit data using regular form encoding, like so: \n POST /mydatabase/add_message\n\nmessage=Message+goes+here \n There are three options for specifying that you would like the response to your request to return JSON data, as opposed to an HTTP redirect to another page. \n \n \n Set an Accept: application/json header on your request \n \n \n Include ?_json=1 in the URL that you POST to \n \n \n Include \"_json\": 1 in your JSON body, or &_json=1 in your form encoded body \n \n \n The JSON response will look like this: \n {\n \"ok\": true,\n \"message\": \"Query executed, 1 row affected\",\n \"redirect\": \"/data/add_name\"\n} \n The \"message\" and \"redirect\" values here will take into account on_success_message , on_success_message_sql , on_success_redirect , on_error_message and on_error_redirect , if they have been set.", "breadcrumbs": "[\"Running SQL queries\", \"Canned queries\"]", "references": "[]"} {"id": "sql_queries:canned-queries-magic-parameters", "page": "sql_queries", "ref": "canned-queries-magic-parameters", "title": "Magic parameters", "content": "Named parameters that start with an underscore are special: they can be used to automatically add values created by Datasette that are not contained in the incoming form fields or query string. \n These magic parameters are only supported for canned queries: to avoid security issues (such as queries that extract the user's private cookies) they are not available to SQL that is executed by the user as a custom SQL query. \n Available magic parameters are: \n \n \n _actor_* - e.g. _actor_id , _actor_name \n \n Fields from the currently authenticated Actors . \n \n \n \n _header_* - e.g. _header_user_agent \n \n Header from the incoming HTTP request. The key should be in lower case and with hyphens converted to underscores e.g. _header_user_agent or _header_accept_language . \n \n \n \n _cookie_* - e.g. _cookie_lang \n \n The value of the incoming cookie of that name. \n \n \n \n _now_epoch \n \n The number of seconds since the Unix epoch. \n \n \n \n _now_date_utc \n \n The date in UTC, e.g. 2020-06-01 \n \n \n \n _now_datetime_utc \n \n The ISO 8601 datetime in UTC, e.g. 2020-06-24T18:01:07Z \n \n \n \n _random_chars_* - e.g. _random_chars_128 \n \n A random string of characters of the specified length. \n \n \n \n Here's an example configuration that adds a message from the authenticated user, storing various pieces of additional metadata using magic parameters: \n [[[cog\nconfig_example(cog, \"\"\"\ndatabases:\n mydatabase:\n queries:\n add_message:\n allow:\n id: \"*\"\n sql: |-\n INSERT INTO messages (\n user_id, message, datetime\n ) VALUES (\n :_actor_id, :message, :_now_datetime_utc\n )\n write: true\n\"\"\") \n ]]] \n [[[end]]] \n The form presented at /mydatabase/add_message will have just a field for message - the other parameters will be populated by the magic parameter mechanism. 
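\n As a hedged sketch, a plugin could supply its own magic parameter through the register_magic_parameters(datasette) hook described next - the _request_ip_* name and the (key, request) function signature shown here are illustrative assumptions rather than a documented recipe: \n from datasette import hookimpl\n\n\ndef request_ip(key, request):\n    # Assumption: the ASGI scope exposes the client address as a (host, port) pair\n    client = request.scope.get(\"client\") or (\"\", 0)\n    return client[0]\n\n\n@hookimpl\ndef register_magic_parameters(datasette):\n    # Would make parameters such as :_request_ip_value available to canned queries\n    return [(\"request_ip\", request_ip)]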
\n Additional custom magic parameters can be added by plugins using the register_magic_parameters(datasette) hook.", "breadcrumbs": "[\"Running SQL queries\", \"Canned queries\"]", "references": "[]"} {"id": "sql_queries:canned-queries-options", "page": "sql_queries", "ref": "canned-queries-options", "title": "Additional canned query options", "content": "Additional options can be specified for canned queries in the YAML or JSON configuration.", "breadcrumbs": "[\"Running SQL queries\", \"Canned queries\"]", "references": "[]"} {"id": "sql_queries:canned-queries-writable", "page": "sql_queries", "ref": "canned-queries-writable", "title": "Writable canned queries", "content": "Canned queries by default are read-only. You can use the \"write\": true key to indicate that a canned query can write to the database. \n See Access to specific canned queries for details on how to add permission checks to canned queries, using the \"allow\" key. \n [[[cog\nconfig_example(cog, {\n \"databases\": {\n \"mydatabase\": {\n \"queries\": {\n \"add_name\": {\n \"sql\": \"INSERT INTO names (name) VALUES (:name)\",\n \"write\": True\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n This configuration will create a page at /mydatabase/add_name displaying a form with a name field. Submitting that form will execute the configured INSERT query. \n You can customize how Datasette represents success and errors using the following optional properties: \n \n \n on_success_message - the message shown when a query is successful \n \n \n on_success_message_sql - alternative to on_success_message : a SQL query that should be executed to generate the message \n \n \n on_success_redirect - the path or URL the user is redirected to on success \n \n \n on_error_message - the message shown when a query throws an error \n \n \n on_error_redirect - the path or URL the user is redirected to on error \n \n \n For example: \n [[[cog\nconfig_example(cog, {\n \"databases\": {\n \"mydatabase\": {\n \"queries\": {\n \"add_name\": {\n \"sql\": \"INSERT INTO names (name) VALUES (:name)\",\n \"params\": [\"name\"],\n \"write\": True,\n \"on_success_message_sql\": \"select 'Name inserted: ' || :name\",\n \"on_success_redirect\": \"/mydatabase/names\",\n \"on_error_message\": \"Name insert failed\",\n \"on_error_redirect\": \"/mydatabase\",\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n You can use \"params\" to explicitly list the named parameters that should be displayed as form fields - otherwise they will be automatically detected. \"params\" is not necessary in the above example, since without it \"name\" would be automatically detected from the query. \n You can pre-populate form fields when the page first loads using a query string, e.g. /mydatabase/add_name?name=Prepopulated . The user will have to submit the form to execute the query. \n If you specify a query in \"on_success_message_sql\" , that query will be executed after the main query. The first column of the first row returned by that query will be displayed as a success message. Named parameters from the main query will be made available to the success message query as well.", "breadcrumbs": "[\"Running SQL queries\", \"Canned queries\"]", "references": "[]"} {"id": "sql_queries:hide-sql", "page": "sql_queries", "ref": "hide-sql", "title": "hide_sql", "content": "Canned queries default to displaying their SQL query at the top of the page. If the query is extremely long you may want to hide it by default, with a \"show\" link that can be used to make it visible. 
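\n For example, here is a configuration sketch that reuses the sf-trees canned query shown elsewhere on this page: \n [[[cog\nconfig_example(cog, {\n    \"databases\": {\n        \"sf-trees\": {\n            \"queries\": {\n                \"just_species\": {\n                    \"sql\": \"select qSpecies from Street_Tree_List\",\n                    \"hide_sql\": True\n                }\n            }\n        }\n    }\n}) \n ]]] \n [[[end]]]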
\n Add the \"hide_sql\": true option to hide the SQL query by default.", "breadcrumbs": "[\"Running SQL queries\", \"Canned queries\", \"Additional canned query options\"]", "references": "[]"} {"id": "sql_queries:id1", "page": "sql_queries", "ref": "id1", "title": "Canned queries", "content": "As an alternative to adding views to your database, you can define canned queries inside your datasette.yaml file. Here's an example: \n [[[cog\nfrom metadata_doc import config_example, config_example\nconfig_example(cog, {\n \"databases\": {\n \"sf-trees\": {\n \"queries\": {\n \"just_species\": {\n \"sql\": \"select qSpecies from Street_Tree_List\"\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n Then run Datasette like this: \n datasette sf-trees.db -m metadata.json \n Each canned query will be listed on the database index page, and will also get its own URL at: \n /database-name/canned-query-name \n For the above example, that URL would be: \n /sf-trees/just_species \n You can optionally include \"title\" and \"description\" keys to show a title and description on the canned query page. As with regular table metadata you can alternatively specify \"description_html\" to have your description rendered as HTML (rather than having HTML special characters escaped).", "breadcrumbs": "[\"Running SQL queries\"]", "references": "[]"} {"id": "sql_queries:id2", "page": "sql_queries", "ref": "id2", "title": "Pagination", "content": "Datasette's default table pagination is designed to be extremely efficient. SQL OFFSET/LIMIT pagination can have a significant performance penalty once you get into multiple thousands of rows, as each page still requires the database to scan through every preceding row to find the correct offset. \n When paginating through tables, Datasette instead orders the rows in the table by their primary key and performs a WHERE clause against the last seen primary key for the previous page. For example: \n select rowid, * from Tree_List where rowid > 200 order by rowid limit 101 \n This represents page three for this particular table, with a page size of 100. \n Note that we request 101 items in the limit clause rather than 100. This allows us to detect if we are on the last page of the results: if the query returns less than 101 rows we know we have reached the end of the pagination set. Datasette will only return the first 100 rows - the 101st is used purely to detect if there should be another page. \n Since the where clause acts against the index on the primary key, the query is extremely fast even for records that are a long way into the overall pagination set.", "breadcrumbs": "[\"Running SQL queries\"]", "references": "[]"} {"id": "sql_queries:sql", "page": "sql_queries", "ref": "sql", "title": "Running SQL queries", "content": "Datasette treats SQLite database files as read-only and immutable. This means it is not possible to execute INSERT or UPDATE statements using Datasette, which allows us to expose SELECT statements to the outside world without needing to worry about SQL injection attacks. \n The easiest way to execute custom SQL against Datasette is through the web UI. The database index page includes a SQL editor that lets you run any SELECT query you like. You can also construct queries using the filter interface on the tables page, then click \"View and edit SQL\" to open that query in the custom SQL editor. \n Note that this interface is only available if the execute-sql permission is allowed. See Controlling the ability to execute arbitrary SQL . 
\n Any Datasette SQL query is reflected in the URL of the page, allowing you to bookmark it, share it with others and navigate through previous queries using your browser back button. \n You can also retrieve the results of any query as JSON by adding .json to the base URL.", "breadcrumbs": "[]", "references": "[]"} {"id": "testing_plugins:testing-plugins-datasette-test-instance", "page": "testing_plugins", "ref": "testing-plugins-datasette-test-instance", "title": "Setting up a Datasette test instance", "content": "The above example shows the easiest way to start writing tests against a Datasette instance: \n from datasette.app import Datasette\nimport pytest\n\n\n@pytest.mark.asyncio\nasync def test_plugin_is_installed():\n datasette = Datasette(memory=True)\n response = await datasette.client.get(\"/-/plugins.json\")\n assert response.status_code == 200 \n Creating a Datasette() instance like this is a useful shortcut in tests, but there is one detail you need to be aware of. It's important to ensure that the async method .invoke_startup() is called on that instance. You can do that like this: \n datasette = Datasette(memory=True)\nawait datasette.invoke_startup() \n This method registers any startup(datasette) or prepare_jinja2_environment(env, datasette) plugins that might themselves need to make async calls. \n If you are using await datasette.client.get() and similar methods then you don't need to worry about this - Datasette automatically calls invoke_startup() the first time it handles a request.", "breadcrumbs": "[\"Testing plugins\"]", "references": "[]"} {"id": "testing_plugins:testing-plugins-pdb", "page": "testing_plugins", "ref": "testing-plugins-pdb", "title": "Using pdb for errors thrown inside Datasette", "content": "If an exception occurs within Datasette itself during a test, the response returned to your test will have a response.status_code value of 500. \n You can add pdb=True to the Datasette constructor to drop into a Python debugger session inside your test run instead of getting back a 500 response code. This is equivalent to running the datasette command-line tool with the --pdb option. \n Here's what that looks like in a test function: \n @pytest.mark.asyncio\nasync def test_that_opens_the_debugger_or_errors():\n ds = Datasette([db_path], pdb=True)\n response = await ds.client.get(\"/\") \n If you use this pattern you will need to run pytest with the -s option to avoid capturing stdin/stdout in order to interact with the debugger prompt.", "breadcrumbs": "[\"Testing plugins\"]", "references": "[]"} {"id": "testing_plugins:testing-plugins-register-in-test", "page": "testing_plugins", "ref": "testing-plugins-register-in-test", "title": "Registering a plugin for the duration of a test", "content": "When writing tests for plugins you may find it useful to register a test plugin just for the duration of a single test. 
You can do this using pm.register() and pm.unregister() like this: \n from datasette import hookimpl\nfrom datasette.app import Datasette\nfrom datasette.plugins import pm\nimport pytest\n\n\n@pytest.mark.asyncio\nasync def test_using_test_plugin():\n class TestPlugin:\n __name__ = \"TestPlugin\"\n\n # Use hookimpl and method names to register hooks\n @hookimpl\n def register_routes(self):\n return [\n (r\"^/error$\", lambda: 1 / 0),\n ]\n\n pm.register(TestPlugin(), name=\"undo\")\n try:\n # The test implementation goes here\n datasette = Datasette()\n response = await datasette.client.get(\"/error\")\n assert response.status_code == 500\n finally:\n pm.unregister(name=\"undo\") \n To reuse the same temporary plugin in multiple tests, you can register it inside a fixture in your conftest.py file like this: \n from datasette import hookimpl\nfrom datasette.app import Datasette\nfrom datasette.plugins import pm\nimport pytest\nimport pytest_asyncio\n\n\n@pytest_asyncio.fixture\nasync def datasette_with_plugin():\n class TestPlugin:\n __name__ = \"TestPlugin\"\n\n @hookimpl\n def register_routes(self):\n return [\n (r\"^/error$\", lambda: 1 / 0),\n ]\n\n pm.register(TestPlugin(), name=\"undo\")\n try:\n yield Datasette()\n finally:\n pm.unregister(name=\"undo\")\n \n Note the yield statement here - this ensures that the finally: block that unregisters the plugin is executed only after the test function itself has completed. \n Then in a test: \n @pytest.mark.asyncio\nasync def test_error(datasette_with_plugin):\n response = await datasette_with_plugin.client.get(\"/error\")\n assert response.status_code == 500", "breadcrumbs": "[\"Testing plugins\"]", "references": "[]"} {"id": "writing_plugins:writing-plugins-building-urls", "page": "writing_plugins", "ref": "writing-plugins-building-urls", "title": "Building URLs within plugins", "content": "Plugins that define their own custom user interface elements may need to link to other pages within Datasette. \n This can be a bit tricky if the Datasette instance is using the base_url configuration setting to run behind a proxy, since that can cause Datasette's URLs to include an additional prefix. \n The datasette.urls object provides internal methods for correctly generating URLs to different pages within Datasette, taking any base_url configuration into account. \n This object is exposed in templates as the urls variable, which can be used like this: \n Back to the Homepage \n See datasette.urls for full details on this object.", "breadcrumbs": "[\"Writing plugins\"]", "references": "[]"} {"id": "writing_plugins:writing-plugins-configuration", "page": "writing_plugins", "ref": "writing-plugins-configuration", "title": "Writing plugins that accept configuration", "content": "When you are writing plugins, you can access plugin configuration like this using the datasette plugin_config() method. If you know you need plugin configuration for a specific table, you can access it like this: \n plugin_config = datasette.plugin_config(\n \"datasette-cluster-map\", database=\"sf-trees\", table=\"Street_Tree_List\"\n) \n This will return the {\"latitude_column\": \"lat\", \"longitude_column\": \"lng\"} in the above example. \n If there is no configuration for that plugin, the method will return None . \n If it cannot find the requested configuration at the table layer, it will fall back to the database layer and then the root layer. 
For example, a user may have set the plugin configuration option inside datasette.yaml like so: \n [[[cog\nfrom metadata_doc import metadata_example\nmetadata_example(cog, {\n \"databases\": {\n \"sf-trees\": {\n \"plugins\": {\n \"datasette-cluster-map\": {\n \"latitude_column\": \"xlat\",\n \"longitude_column\": \"xlng\"\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n In this case, the above code would return that configuration for ANY table within the sf-trees database. \n The plugin configuration could also be set at the top level of datasette.yaml : \n [[[cog\nmetadata_example(cog, {\n \"plugins\": {\n \"datasette-cluster-map\": {\n \"latitude_column\": \"xlat\",\n \"longitude_column\": \"xlng\"\n }\n }\n}) \n ]]] \n [[[end]]] \n Now that datasette-cluster-map plugin configuration will apply to every table in every database.", "breadcrumbs": "[\"Writing plugins\"]", "references": "[]"} {"id": "writing_plugins:writing-plugins-tracing", "page": "writing_plugins", "ref": "writing-plugins-tracing", "title": "Tracing plugin hooks", "content": "The DATASETTE_TRACE_PLUGINS environment variable turns on detailed tracing showing exactly which hooks are being run. This can be useful for understanding how Datasette is using your plugin. \n DATASETTE_TRACE_PLUGINS=1 datasette mydb.db \n Example output: \n actor_from_request:\n{ 'datasette': ,\n 'request': }\nHook implementations:\n[ >,\n >,\n >]\nResults:\n[{'id': 'root'}]", "breadcrumbs": "[\"Writing plugins\"]", "references": "[]"} {"id": "installation:installing-plugins", "page": "installation", "ref": "installing-plugins", "title": "Installing plugins", "content": "If you want to install plugins into your local Datasette Docker image you can do\n so using the following recipe. This will install the plugins and then save a\n brand new local image called datasette-with-plugins : \n docker run datasetteproject/datasette \\\n pip install datasette-vega\n\ndocker commit $(docker ps -lq) datasette-with-plugins \n You can now run the new custom image like so: \n docker run -p 8001:8001 -v `pwd`:/mnt \\\n datasette-with-plugins \\\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \n You can confirm that the plugins are installed by visiting\n http://127.0.0.1:8001/-/plugins \n Some plugins such as datasette-ripgrep may need additional system packages. You can install these by running apt-get install inside the container: \n docker run datasette-057a0 bash -c '\n apt-get update &&\n apt-get install ripgrep &&\n pip install datasette-ripgrep'\n\ndocker commit $(docker ps -lq) datasette-with-ripgrep", "breadcrumbs": "[\"Installation\", \"Advanced installation options\", \"Using Docker\"]", "references": "[{\"href\": \"http://127.0.0.1:8001/-/plugins\", \"label\": \"http://127.0.0.1:8001/-/plugins\"}, {\"href\": \"https://datasette.io/plugins/datasette-ripgrep\", \"label\": \"datasette-ripgrep\"}]"} {"id": "installation:loading-spatialite", "page": "installation", "ref": "loading-spatialite", "title": "Loading SpatiaLite", "content": "The datasetteproject/datasette image includes a recent version of the\n SpatiaLite extension for SQLite. 
To load and enable that\n module, use the following command: \n docker run -p 8001:8001 -v `pwd`:/mnt \\\n datasetteproject/datasette \\\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \\\n --load-extension=spatialite \n You can confirm that SpatiaLite is successfully loaded by visiting\n http://127.0.0.1:8001/-/versions", "breadcrumbs": "[\"Installation\", \"Advanced installation options\", \"Using Docker\"]", "references": "[{\"href\": \"http://127.0.0.1:8001/-/versions\", \"label\": \"http://127.0.0.1:8001/-/versions\"}]"} {"id": "plugin_hooks:plugin-hook-prepare-jinja2-environment", "page": "plugin_hooks", "ref": "plugin-hook-prepare-jinja2-environment", "title": "prepare_jinja2_environment(env, datasette)", "content": "env - jinja2 Environment \n \n The template environment that is being prepared \n \n \n \n datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) \n \n \n \n This hook is called with the Jinja2 environment that is used to evaluate\n Datasette HTML templates. You can use it to do things like register custom\n template filters , for\n example: \n from datasette import hookimpl\n\n\n@hookimpl\ndef prepare_jinja2_environment(env):\n env.filters[\"uppercase\"] = lambda u: u.upper() \n You can now use this filter in your custom templates like so: \n Table name: {{ table|uppercase }} \n This function can return an awaitable function if it needs to run any async code. \n Examples: datasette-edit-templates", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"http://jinja.pocoo.org/docs/2.10/api/#custom-filters\", \"label\": \"register custom\\n template filters\"}, {\"href\": \"https://datasette.io/plugins/datasette-edit-templates\", \"label\": \"datasette-edit-templates\"}]"} {"id": "getting_started:getting-started-your-computer", "page": "getting_started", "ref": "getting-started-your-computer", "title": "Using Datasette on your own computer", "content": "First, follow the Installation instructions. Now you can run Datasette against a SQLite file on your computer using the following command: \n datasette path/to/database.db \n This will start a web server on port 8001 - visit http://localhost:8001/ \n to access the web interface. \n Add -o to open your browser automatically once Datasette has started: \n datasette path/to/database.db -o \n Use Chrome on OS X? You can run datasette against your browser history\n like so: \n datasette ~/Library/Application\\ Support/Google/Chrome/Default/History --nolock \n The --nolock option ignores any file locks. This is safe as Datasette will open the file in read-only mode. 
\n Now visiting http://localhost:8001/History/downloads will show you a web\n interface to browse your downloads data: \n \n \n \n http://localhost:8001/History/downloads.json will return that data as\n JSON: \n {\n \"database\": \"History\",\n \"columns\": [\n \"id\",\n \"current_path\",\n \"target_path\",\n \"start_time\",\n \"received_bytes\",\n \"total_bytes\",\n ...\n ],\n \"rows\": [\n [\n 1,\n \"/Users/simonw/Downloads/DropboxInstaller.dmg\",\n \"/Users/simonw/Downloads/DropboxInstaller.dmg\",\n 13097290269022132,\n 626688,\n 0,\n ...\n ]\n ]\n} \n http://localhost:8001/History/downloads.json?_shape=objects will return that data as\n JSON in a more convenient format: \n {\n ...\n \"rows\": [\n {\n \"start_time\": 13097290269022132,\n \"interrupt_reason\": 0,\n \"hash\": \"\",\n \"id\": 1,\n \"site_url\": \"\",\n \"referrer\": \"https://www.dropbox.com/downloading?src=index\",\n ...\n }\n ]\n}", "breadcrumbs": "[\"Getting started\"]", "references": "[{\"href\": \"http://localhost:8001/\", \"label\": \"http://localhost:8001/\"}, {\"href\": \"http://localhost:8001/History/downloads\", \"label\": \"http://localhost:8001/History/downloads\"}, {\"href\": \"http://localhost:8001/History/downloads.json\", \"label\": \"http://localhost:8001/History/downloads.json\"}, {\"href\": \"http://localhost:8001/History/downloads.json?_shape=objects\", \"label\": \"http://localhost:8001/History/downloads.json?_shape=objects\"}]"} {"id": "writing_plugins:writing-plugins-one-off", "page": "writing_plugins", "ref": "writing-plugins-one-off", "title": "Writing one-off plugins", "content": "The quickest way to start writing a plugin is to create a my_plugin.py file and drop it into your plugins/ directory. Here is an example plugin, which adds a new custom SQL function called hello_world() which takes no arguments and returns the string Hello world! . \n from datasette import hookimpl\n\n\n@hookimpl\ndef prepare_connection(conn):\n conn.create_function(\n \"hello_world\", 0, lambda: \"Hello world!\"\n ) \n If you save this in plugins/my_plugin.py you can then start Datasette like this: \n datasette serve mydb.db --plugins-dir=plugins/ \n Now you can navigate to http://localhost:8001/mydb and run this SQL: \n select hello_world(); \n To see the output of your plugin.", "breadcrumbs": "[\"Writing plugins\"]", "references": "[{\"href\": \"http://localhost:8001/mydb\", \"label\": \"http://localhost:8001/mydb\"}]"} {"id": "changelog:id88", "page": "changelog", "ref": "id88", "title": "0.27 (2019-01-31)", "content": "New command: datasette plugins ( documentation ) shows you the currently installed list of plugins. \n \n \n Datasette can now output newline-delimited JSON using the new ?_shape=array&_nl=on query string option. \n \n \n Added documentation on The Datasette Ecosystem . 
\n \n \n Now using Python 3.7.2 as the base for the official Datasette Docker image .", "breadcrumbs": "[\"Changelog\"]", "references": "[{\"href\": \"http://ndjson.org/\", \"label\": \"newline-delimited JSON\"}, {\"href\": \"https://hub.docker.com/r/datasetteproject/datasette/\", \"label\": \"Datasette Docker image\"}]"} {"id": "changelog:id85", "page": "changelog", "ref": "id85", "title": "0.28 (2019-05-19)", "content": "A salmagundi of new features!", "breadcrumbs": "[\"Changelog\"]", "references": "[{\"href\": \"https://adamj.eu/tech/2019/01/18/a-salmagundi-of-django-alpha-announcements/\", \"label\": \"salmagundi\"}]"} {"id": "changelog:asgi", "page": "changelog", "ref": "asgi", "title": "ASGI", "content": "ASGI is the Asynchronous Server Gateway Interface standard. I've been wanting to convert Datasette into an ASGI application for over a year - Port Datasette to ASGI #272 tracks thirteen months of intermittent development - but with Datasette 0.29 the change is finally released. This also means Datasette now runs on top of Uvicorn and no longer depends on Sanic . \n I wrote about the significance of this change in Porting Datasette to ASGI, and Turtles all the way down . \n The most exciting consequence of this change is that Datasette plugins can now take advantage of the ASGI standard.", "breadcrumbs": "[\"Changelog\", \"0.29 (2019-07-07)\"]", "references": "[{\"href\": \"https://asgi.readthedocs.io/\", \"label\": \"ASGI\"}, {\"href\": \"https://github.com/simonw/datasette/issues/272\", \"label\": \"Port Datasette to ASGI #272\"}, {\"href\": \"https://www.uvicorn.org/\", \"label\": \"Uvicorn\"}, {\"href\": \"https://github.com/huge-success/sanic\", \"label\": \"Sanic\"}, {\"href\": \"https://simonwillison.net/2019/Jun/23/datasette-asgi/\", \"label\": \"Porting Datasette to ASGI, and Turtles all the way down\"}]"} {"id": "plugin_hooks:plugin-asgi-wrapper", "page": "plugin_hooks", "ref": "plugin-asgi-wrapper", "title": "asgi_wrapper(datasette)", "content": "Return an ASGI middleware wrapper function that will be applied to the Datasette ASGI application. \n This is a very powerful hook. You can use it to manipulate the entire Datasette response, or even to configure new URL routes that will be handled by your own custom code. \n You can write your ASGI code directly against the low-level specification, or you can use the middleware utilities provided by an ASGI framework such as Starlette . 
\n This example plugin adds a x-databases HTTP header listing the currently attached databases: \n from datasette import hookimpl\nfrom functools import wraps\n\n\n@hookimpl\ndef asgi_wrapper(datasette):\n def wrap_with_databases_header(app):\n @wraps(app)\n async def add_x_databases_header(\n scope, receive, send\n ):\n async def wrapped_send(event):\n if event[\"type\"] == \"http.response.start\":\n original_headers = (\n event.get(\"headers\") or []\n )\n event = {\n \"type\": event[\"type\"],\n \"status\": event[\"status\"],\n \"headers\": original_headers\n + [\n [\n b\"x-databases\",\n \", \".join(\n datasette.databases.keys()\n ).encode(\"utf-8\"),\n ]\n ],\n }\n await send(event)\n\n await app(scope, receive, wrapped_send)\n\n return add_x_databases_header\n\n return wrap_with_databases_header \n Examples: datasette-cors , datasette-pyinstrument , datasette-total-page-time", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://asgi.readthedocs.io/\", \"label\": \"ASGI\"}, {\"href\": \"https://www.starlette.io/middleware/\", \"label\": \"Starlette\"}, {\"href\": \"https://datasette.io/plugins/datasette-cors\", \"label\": \"datasette-cors\"}, {\"href\": \"https://datasette.io/plugins/datasette-pyinstrument\", \"label\": \"datasette-pyinstrument\"}, {\"href\": \"https://datasette.io/plugins/datasette-total-page-time\", \"label\": \"datasette-total-page-time\"}]"} {"id": "internals:internals-request", "page": "internals", "ref": "internals-request", "title": "Request object", "content": "The request object is passed to various plugin hooks. It represents an incoming HTTP request. It has the following properties: \n \n \n .scope - dictionary \n \n The ASGI scope that was used to construct this request, described in the ASGI HTTP connection scope specification. \n \n \n \n .method - string \n \n The HTTP method for this request, usually GET or POST . \n \n \n \n .url - string \n \n The full URL for this request, e.g. https://latest.datasette.io/fixtures . \n \n \n \n .scheme - string \n \n The request scheme - usually https or http . \n \n \n \n .headers - dictionary (str -> str) \n \n A dictionary of incoming HTTP request headers. Header names have been converted to lowercase. \n \n \n \n .cookies - dictionary (str -> str) \n \n A dictionary of incoming cookies \n \n \n \n .host - string \n \n The host header from the incoming request, e.g. latest.datasette.io or localhost . \n \n \n \n .path - string \n \n The path of the request excluding the query string, e.g. /fixtures . \n \n \n \n .full_path - string \n \n The path of the request including the query string if one is present, e.g. /fixtures?sql=select+sqlite_version() . \n \n \n \n .query_string - string \n \n The query string component of the request, without the ? - e.g. name__contains=sam&age__gt=10 . \n \n \n \n .args - MultiParams \n \n An object representing the parsed query string parameters, see below. \n \n \n \n .url_vars - dictionary (str -> str) \n \n Variables extracted from the URL path, if that path was defined using a regular expression. See register_routes(datasette) . \n \n \n \n .actor - dictionary (str -> Any) or None \n \n The currently authenticated actor (see actors ), or None if the request is unauthenticated. \n \n \n \n The object also has two awaitable methods: \n \n \n await request.post_vars() - dictionary \n \n Returns a dictionary of form variables that were submitted in the request body via POST . Don't forget to read about CSRF protection ! 
\n \n \n \n await request.post_body() - bytes \n \n Returns the un-parsed body of a request submitted by POST - useful for things like incoming JSON data. \n \n \n \n And a class method that can be used to create fake request objects for use in tests: \n \n \n fake(path_with_query_string, method=\"GET\", scheme=\"http\", url_vars=None) \n \n Returns a Request instance for the specified path and method. For example: \n from datasette import Request\nfrom pprint import pprint\n\nrequest = Request.fake(\n \"/fixtures/facetable/\",\n url_vars={\"database\": \"fixtures\", \"table\": \"facetable\"},\n)\npprint(request.scope) \n This outputs: \n {'http_version': '1.1',\n 'method': 'GET',\n 'path': '/fixtures/facetable/',\n 'query_string': b'',\n 'raw_path': b'/fixtures/facetable/',\n 'scheme': 'http',\n 'type': 'http',\n 'url_route': {'kwargs': {'database': 'fixtures', 'table': 'facetable'}}}", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[{\"href\": \"https://asgi.readthedocs.io/en/latest/specs/www.html#connection-scope\", \"label\": \"ASGI HTTP connection scope\"}]"} {"id": "plugin_hooks:plugin-hook-skip-csrf", "page": "plugin_hooks", "ref": "plugin-hook-skip-csrf", "title": "skip_csrf(datasette, scope)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. \n \n \n \n scope - dictionary \n \n The ASGI scope for the incoming HTTP request. \n \n \n \n This hook can be used to skip CSRF protection for a specific incoming request. For example, you might have a custom path at /submit-comment which is designed to accept comments from anywhere, whether or not the incoming request originated on the site and has an accompanying CSRF token. \n This example will disable CSRF protection for that specific URL path: \n from datasette import hookimpl\n\n\n@hookimpl\ndef skip_csrf(scope):\n return scope[\"path\"] == \"/submit-comment\" \n If any of the currently active skip_csrf() plugin hooks return True , CSRF protection will be skipped for the request.", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://asgi.readthedocs.io/en/latest/specs/www.html#http-connection-scope\", \"label\": \"ASGI scope\"}]"} {"id": "installation:installation-homebrew", "page": "installation", "ref": "installation-homebrew", "title": "Using Homebrew", "content": "If you have a Mac and use Homebrew , you can install Datasette by running this command in your terminal: \n brew install datasette \n This should install the latest version. You can confirm by running: \n datasette --version \n You can upgrade to the latest Homebrew packaged version using: \n brew upgrade datasette \n Once you have installed Datasette you can install plugins using the following: \n datasette install datasette-vega \n If the latest packaged release of Datasette has not yet been made available through Homebrew, you can upgrade your Homebrew installation in-place using: \n datasette install -U datasette", "breadcrumbs": "[\"Installation\", \"Basic installation\"]", "references": "[{\"href\": \"https://brew.sh/\", \"label\": \"Homebrew\"}]"} {"id": "spatialite:installing-spatialite-on-os-x", "page": "spatialite", "ref": "installing-spatialite-on-os-x", "title": "Installing SpatiaLite on OS X", "content": "The easiest way to install SpatiaLite on OS X is to use Homebrew . 
\n brew update\nbrew install spatialite-tools \n This will install the spatialite command-line tool and the mod_spatialite dynamic library. \n You can now run Datasette like so: \n datasette --load-extension=spatialite", "breadcrumbs": "[\"SpatiaLite\", \"Installation\"]", "references": "[{\"href\": \"https://brew.sh/\", \"label\": \"Homebrew\"}]"} {"id": "plugin_hooks:plugin-hook-register-commands", "page": "plugin_hooks", "ref": "plugin-hook-register-commands", "title": "register_commands(cli)", "content": "cli - the root Datasette Click command group \n \n Use this to register additional CLI commands \n \n \n \n Register additional CLI commands that can be run using datasette yourcommand ... . This provides a mechanism by which plugins can add new CLI commands to Datasette. \n This example registers a new datasette verify file1.db file2.db command that checks if the provided file paths are valid SQLite databases: \n from datasette import hookimpl\nimport click\nimport sqlite3\n\n\n@hookimpl\ndef register_commands(cli):\n @cli.command()\n @click.argument(\n \"files\", type=click.Path(exists=True), nargs=-1\n )\n def verify(files):\n \"Verify that files can be opened by Datasette\"\n for file in files:\n conn = sqlite3.connect(str(file))\n try:\n conn.execute(\"select * from sqlite_master\")\n except sqlite3.DatabaseError:\n raise click.ClickException(\n \"Invalid database: {}\".format(file)\n ) \n The new command can then be executed like so: \n datasette verify fixtures.db \n Help text (from the docstring for the function plus any defined Click arguments or options) will become available using: \n datasette verify --help \n Plugins can register multiple commands by making multiple calls to the @cli.command() decorator. Consult the Click documentation for full details on how to build a CLI command, including how to define arguments and options. \n Note that register_commands() plugins cannot be used with the --plugins-dir mechanism - they need to be installed into the same virtual environment as Datasette using pip install . Provided it has a setup.py file (see Packaging a plugin ) you can run pip install directly against the directory in which you are developing your plugin like so: \n pip install -e path/to/my/datasette-plugin \n Examples: datasette-auth-passwords , datasette-verify", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://click.palletsprojects.com/en/latest/commands/#callback-invocation\", \"label\": \"Click command group\"}, {\"href\": \"https://click.palletsprojects.com/\", \"label\": \"Click documentation\"}, {\"href\": \"https://datasette.io/plugins/datasette-auth-passwords\", \"label\": \"datasette-auth-passwords\"}, {\"href\": \"https://datasette.io/plugins/datasette-verify\", \"label\": \"datasette-verify\"}]"} {"id": "publish:publish-cloud-run", "page": "publish", "ref": "publish-cloud-run", "title": "Publishing to Google Cloud Run", "content": "Google Cloud Run allows you to publish data in a scale-to-zero environment, so your application will start running when the first request is received and will shut down again when traffic ceases. This means you only pay for time spent serving traffic. \n \n Cloud Run is a great option for inexpensively hosting small, low traffic projects - but costs can add up for projects that serve a lot of requests. \n Be particularly careful if your project has tables with large numbers of rows. Search engine crawlers that index a page for every row could result in a high bill. 
\n The datasette-block-robots plugin can be used to request search engine crawlers omit crawling your site, which can help avoid this issue. \n \n You will first need to install and configure the Google Cloud CLI tools by following these instructions . \n You can then publish one or more SQLite database files to Google Cloud Run using the following command: \n datasette publish cloudrun mydatabase.db --service=my-database \n A Cloud Run service is a single hosted application. The service name you specify will be used as part of the Cloud Run URL. If you deploy to a service name that you have used in the past your new deployment will replace the previous one. \n If you omit the --service option you will be asked to pick a service name interactively during the deploy. \n You may need to interact with prompts from the tool. Many of the prompts ask for values that can be set as properties for the Google Cloud SDK if you want to avoid the prompts. \n For example, the default region for the deployed instance can be set using the command: \n gcloud config set run/region us-central1 \n You should replace us-central1 with your desired region . Alternately, you can specify the region by setting the CLOUDSDK_RUN_REGION environment variable. \n Once it has finished it will output a URL like this one: \n Service [my-service] revision [my-service-00001] has been deployed\nand is serving traffic at https://my-service-j7hipcg4aq-uc.a.run.app \n Cloud Run provides a URL on the .run.app domain, but you can also point your own domain or subdomain at your Cloud Run service - see mapping custom domains in the Cloud Run documentation for details. \n See datasette publish cloudrun for the full list of options for this command.", "breadcrumbs": "[\"Publishing data\", \"datasette publish\"]", "references": "[{\"href\": \"https://cloud.google.com/run/\", \"label\": \"Google Cloud Run\"}, {\"href\": \"https://datasette.io/plugins/datasette-block-robots\", \"label\": \"datasette-block-robots\"}, {\"href\": \"https://cloud.google.com/sdk/\", \"label\": \"these instructions\"}, {\"href\": \"https://cloud.google.com/sdk/docs/properties\", \"label\": \"set as properties for the Google Cloud SDK\"}, {\"href\": \"https://cloud.google.com/about/locations\", \"label\": \"region\"}, {\"href\": \"https://cloud.google.com/run/docs/mapping-custom-domains\", \"label\": \"mapping custom domains\"}]"} {"id": "changelog:v0-28-publish-cloudrun", "page": "changelog", "ref": "v0-28-publish-cloudrun", "title": "datasette publish cloudrun", "content": "Google Cloud Run is a brand new serverless hosting platform from Google, which allows you to build a Docker container which will run only when HTTP traffic is received and will shut down (and hence cost you nothing) the rest of the time. It's similar to Zeit's Now v1 Docker hosting platform which sadly is no longer accepting signups from new users. \n The new datasette publish cloudrun command was contributed by Romain Primet ( #434 ) and publishes selected databases to a new Datasette instance running on Google Cloud Run. 
\n See Publishing to Google Cloud Run for full documentation.", "breadcrumbs": "[\"Changelog\", \"0.28 (2019-05-19)\"]", "references": "[{\"href\": \"https://cloud.google.com/run/\", \"label\": \"Google Cloud Run\"}, {\"href\": \"https://hyperion.alpha.spectrum.chat/zeit/now/cannot-create-now-v1-deployments~d206a0d4-5835-4af5-bb5c-a17f0171fb25?m=MTU0Njk2NzgwODM3OA==\", \"label\": \"no longer accepting signups\"}, {\"href\": \"https://github.com/simonw/datasette/pull/434\", \"label\": \"#434\"}]"} {"id": "contributing:contributing-upgrading-codemirror", "page": "contributing", "ref": "contributing-upgrading-codemirror", "title": "Upgrading CodeMirror", "content": "Datasette bundles CodeMirror for the SQL editing interface, e.g. on this page . Here are the steps for upgrading to a new version of CodeMirror: \n \n \n Install the packages with: \n npm i codemirror @codemirror/lang-sql \n \n \n Build the bundle using the version number from package.json with: \n node_modules/.bin/rollup datasette/static/cm-editor-6.0.1.js \\\n -f iife \\\n -n cm \\\n -o datasette/static/cm-editor-6.0.1.bundle.js \\\n -p @rollup/plugin-node-resolve \\\n -p @rollup/plugin-terser \n \n \n Update the version reference in the codemirror.html template.", "breadcrumbs": "[\"Contributing\"]", "references": "[{\"href\": \"https://codemirror.net/\", \"label\": \"CodeMirror\"}, {\"href\": \"https://latest.datasette.io/fixtures\", \"label\": \"this page\"}]"} {"id": "facets:id1", "page": "facets", "ref": "id1", "title": "Facets", "content": "Datasette facets can be used to add a faceted browse interface to any database table.\n With facets, tables are displayed along with a summary showing the most common values in specified columns.\n These values can be selected to further filter the table. \n Here's an example : \n \n Facets can be specified in two ways: using query string parameters, or in metadata.json configuration for the table.", "breadcrumbs": "[]", "references": "[{\"href\": \"https://congress-legislators.datasettes.com/legislators/legislator_terms?_facet=type&_facet=party&_facet=state&_facet_size=10\", \"label\": \"an example\"}]"} {"id": "changelog:id23", "page": "changelog", "ref": "id23", "title": "0.59.3 (2021-11-20)", "content": "Fixed numerous bugs when running Datasette behind a proxy with a prefix URL path using the base_url setting. A live demo of this mode is now available at datasette-apache-proxy-demo.datasette.io/prefix/ . ( #1519 , #838 ) \n \n \n ?column__arraycontains= and ?column__arraynotcontains= table parameters now also work against SQL views. ( #448 ) \n \n \n ?_facet_array=column no longer returns incorrect counts if columns contain the same value more than once.", "breadcrumbs": "[\"Changelog\"]", "references": "[{\"href\": \"https://datasette-apache-proxy-demo.datasette.io/prefix/\", \"label\": \"datasette-apache-proxy-demo.datasette.io/prefix/\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1519\", \"label\": \"#1519\"}, {\"href\": \"https://github.com/simonw/datasette/issues/838\", \"label\": \"#838\"}, {\"href\": \"https://github.com/simonw/datasette/issues/448\", \"label\": \"#448\"}]"} {"id": "ecosystem:ecosystem", "page": "ecosystem", "ref": "ecosystem", "title": "The Datasette Ecosystem", "content": "Datasette sits at the center of a growing ecosystem of open source tools aimed at making it as easy as possible to gather, analyze and publish interesting data. 
\n These tools are divided into two main groups: tools for building SQLite databases (for use with Datasette) and plugins that extend Datasette's functionality. \n The Datasette project website includes a directory of plugins and a directory of tools: \n \n \n Plugins directory on datasette.io \n \n \n Tools directory on datasette.io", "breadcrumbs": "[]", "references": "[{\"href\": \"https://datasette.io/\", \"label\": \"Datasette project website\"}, {\"href\": \"https://datasette.io/plugins\", \"label\": \"Plugins directory on datasette.io\"}, {\"href\": \"https://datasette.io/tools\", \"label\": \"Tools directory on datasette.io\"}]"} {"id": "changelog:id37", "page": "changelog", "ref": "id37", "title": "0.53 (2020-12-10)", "content": "Datasette has an official project website now, at https://datasette.io/ . This release mainly updates the documentation to reflect the new site. \n \n \n New ?column__arraynotcontains= table filter. ( #1132 ) \n \n \n datasette serve has a new --create option, which will create blank database files if they do not already exist rather than exiting with an error. ( #1135 ) \n \n \n New ?_header=off option for CSV export which omits the CSV header row, documented here . ( #1133 ) \n \n \n \"Powered by Datasette\" link in the footer now links to https://datasette.io/ . ( #1138 ) \n \n \n Project news no longer lives in the README - it can now be found at https://datasette.io/news . ( #1137 )", "breadcrumbs": "[\"Changelog\"]", "references": "[{\"href\": \"https://datasette.io/\", \"label\": \"https://datasette.io/\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1132\", \"label\": \"#1132\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1135\", \"label\": \"#1135\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1133\", \"label\": \"#1133\"}, {\"href\": \"https://datasette.io/\", \"label\": \"https://datasette.io/\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1138\", \"label\": \"#1138\"}, {\"href\": \"https://datasette.io/news\", \"label\": \"https://datasette.io/news\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1137\", \"label\": \"#1137\"}]"} {"id": "installation:installation-datasette-desktop", "page": "installation", "ref": "installation-datasette-desktop", "title": "Datasette Desktop for Mac", "content": "Datasette Desktop is a packaged Mac application which bundles Datasette together with Python and allows you to install and run Datasette directly on your laptop. This is the best option for local installation if you are not comfortable using the command line.", "breadcrumbs": "[\"Installation\", \"Basic installation\"]", "references": "[{\"href\": \"https://datasette.io/desktop\", \"label\": \"Datasette Desktop\"}]"} {"id": "writing_plugins:writing-plugins-designing-urls", "page": "writing_plugins", "ref": "writing-plugins-designing-urls", "title": "Designing URLs for your plugin", "content": "You can register new URL routes within Datasette using the register_routes(datasette) plugin hook. \n Datasette's default URLs include these: \n \n \n /dbname - database page \n \n \n /dbname/tablename - table page \n \n \n /dbname/tablename/pk - row page \n \n \n See Pages and API endpoints and Introspection for more default URL routes. \n To avoid accidentally conflicting with a database file that may be loaded into Datasette, plugins should register URLs using a /-/ prefix. 
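\n In register_routes(datasette) terms that might look something like the following sketch - the /-/upload-excel path and the handler body are purely illustrative, matching the hypothetical example discussed next: \n from datasette import hookimpl, Response\n\n\nasync def upload_excel(request):\n    # Placeholder handler - a real plugin would render an upload form here\n    return Response.html(\"Upload form goes here\")\n\n\n@hookimpl\ndef register_routes():\n    # The /-/ prefix avoids clashing with any attached database names\n    return [(r\"^/-/upload-excel$\", upload_excel)] \n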
For example, if your plugin adds a new interface for uploading Excel files you might register a URL route like this one: \n \n \n /-/upload-excel \n \n \n Try to avoid registering URLs that clash with other plugins that your users might have installed. There is no central repository of reserved URL paths (yet) but you can review existing plugins by browsing the plugins directory . \n If your plugin includes functionality that relates to a specific database you could also register a URL route like this: \n \n \n /dbname/-/upload-excel \n \n \n Or for a specific table like this: \n \n \n /dbname/tablename/-/modify-table-schema \n \n \n Note that a row could have a primary key of - and this URL scheme will still work, because Datasette row pages do not ever have a trailing slash followed by additional path components.", "breadcrumbs": "[\"Writing plugins\"]", "references": "[{\"href\": \"https://datasette.io/plugins\", \"label\": \"plugins directory\"}]"} {"id": "plugin_hooks:plugin-register-output-renderer", "page": "plugin_hooks", "ref": "plugin-register-output-renderer", "title": "register_output_renderer(datasette)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) \n \n \n \n Registers a new output renderer, to output data in a custom format. The hook function should return a dictionary, or a list of dictionaries, of the following shape: \n @hookimpl\ndef register_output_renderer(datasette):\n return {\n \"extension\": \"test\",\n \"render\": render_demo,\n \"can_render\": can_render_demo, # Optional\n } \n This will register render_demo to be called when paths with the extension .test (for example /database.test , /database/table.test , or /database/table/row.test ) are requested. \n render_demo is a Python function. It can be a regular function or an async def render_demo() awaitable function, depending on if it needs to make any asynchronous calls. \n can_render_demo is a Python function (or async def function) which accepts the same arguments as render_demo but just returns True or False . It lets Datasette know if the current SQL query can be represented by the plugin - and hence influence if a link to this output format is displayed in the user interface. If you omit the \"can_render\" key from the dictionary every query will be treated as being supported by the plugin. \n When a request is received, the \"render\" callback function is called with zero or more of the following arguments. Datasette will inspect your callback function and pass arguments that match its function signature. \n \n \n datasette - Datasette class \n \n For accessing plugin configuration and executing queries. \n \n \n \n columns - list of strings \n \n The names of the columns returned by this query. \n \n \n \n rows - list of sqlite3.Row objects \n \n The rows returned by the query. \n \n \n \n sql - string \n \n The SQL query that was executed. \n \n \n \n query_name - string or None \n \n If this was the execution of a canned query , the name of that query. \n \n \n \n database - string \n \n The name of the database. \n \n \n \n table - string or None \n \n The table or view, if one is being rendered. \n \n \n \n request - Request object \n \n The current HTTP request. \n \n \n \n error - string or None \n \n If an error occurred this string will contain the error message. 
\n \n \n \n truncated - bool or None \n \n If the query response was truncated - for example a SQL query returning more than 1,000 results where pagination is not available - this will be True . \n \n \n \n view_name - string \n \n The name of the current view being called. index , database , table , and row are the most important ones. \n \n \n \n The callback function can return None , if it is unable to render the data, or a Response class that will be returned to the caller. \n It can also return a dictionary with the following keys. This format is deprecated as-of Datasette 0.49 and will be removed by Datasette 1.0. \n \n \n body - string or bytes, optional \n \n The response body, default empty \n \n \n \n content_type - string, optional \n \n The Content-Type header, default text/plain \n \n \n \n status_code - integer, optional \n \n The HTTP status code, default 200 \n \n \n \n headers - dictionary, optional \n \n Extra HTTP headers to be returned in the response. \n \n \n \n An example of an output renderer callback function: \n def render_demo():\n return Response.text(\"Hello World\") \n Here is a more complex example: \n async def render_demo(datasette, columns, rows):\n db = datasette.get_database()\n result = await db.execute(\"select sqlite_version()\")\n first_row = \" | \".join(columns)\n lines = [first_row]\n lines.append(\"=\" * len(first_row))\n for row in rows:\n lines.append(\" | \".join(row))\n return Response(\n \"\\n\".join(lines),\n content_type=\"text/plain; charset=utf-8\",\n headers={\"x-sqlite-version\": result.first()[0]},\n ) \n And here is an example can_render function which returns True only if the query results contain the columns atom_id , atom_title and atom_updated : \n def can_render_demo(columns):\n return {\n \"atom_id\",\n \"atom_title\",\n \"atom_updated\",\n }.issubset(columns) \n Examples: datasette-atom , datasette-ics , datasette-geojson , datasette-copyable", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-atom\", \"label\": \"datasette-atom\"}, {\"href\": \"https://datasette.io/plugins/datasette-ics\", \"label\": \"datasette-ics\"}, {\"href\": \"https://datasette.io/plugins/datasette-geojson\", \"label\": \"datasette-geojson\"}, {\"href\": \"https://datasette.io/plugins/datasette-copyable\", \"label\": \"datasette-copyable\"}]"} {"id": "plugin_hooks:plugin-register-routes", "page": "plugin_hooks", "ref": "plugin-register-routes", "title": "register_routes(datasette)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) \n \n \n \n Register additional view functions to execute for specified URL routes. \n Return a list of (regex, view_function) pairs, something like this: \n from datasette import hookimpl, Response\nimport html\n\n\nasync def hello_from(request):\n name = request.url_vars[\"name\"]\n return Response.html(\n \"Hello from {}\".format(html.escape(name))\n )\n\n\n@hookimpl\ndef register_routes():\n return [(r\"^/hello-from/(?P<name>.*)$\", hello_from)] \n The view functions can take a number of different optional arguments. The corresponding argument will be passed to your function depending on its named parameters - a form of dependency injection. \n The optional view function arguments are as follows: \n \n \n datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. 
\n \n \n \n request - Request object \n \n The current HTTP request. \n \n \n \n scope - dictionary \n \n The incoming ASGI scope dictionary. \n \n \n \n send - function \n \n The ASGI send function. \n \n \n \n receive - function \n \n The ASGI receive function. \n \n \n \n The view function can be a regular function or an async def function, depending on if it needs to use any await APIs. \n The function can either return a Response class or it can return nothing and instead respond directly to the request using the ASGI send function (for advanced uses only). \n It can also raise the datasette.NotFound exception to return a 404 not found error, or the datasette.Forbidden exception for a 403 forbidden. \n See Designing URLs for your plugin for tips on designing the URL routes used by your plugin. \n Examples: datasette-auth-github , datasette-psutil", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-auth-github\", \"label\": \"datasette-auth-github\"}, {\"href\": \"https://datasette.io/plugins/datasette-psutil\", \"label\": \"datasette-psutil\"}]"} {"id": "changelog:v1-0-a4", "page": "changelog", "ref": "v1-0-a4", "title": "1.0a4 (2023-08-21)", "content": "This alpha fixes a security issue with the /-/api API explorer. On authenticated Datasette instances (instances protected using plugins such as datasette-auth-passwords ) the API explorer interface could reveal the names of databases and tables within the protected instance. The data stored in those tables was not revealed. \n For more information and workarounds, read the security advisory . The issue has been present in every previous alpha version of Datasette 1.0: versions 1.0a0, 1.0a1, 1.0a2 and 1.0a3. \n Also in this alpha: \n \n \n The new datasette plugins --requirements option outputs a list of currently installed plugins in Python requirements.txt format, useful for duplicating that installation elsewhere. ( #2133 ) \n \n \n Writable canned queries can now define a on_success_message_sql field in their configuration, containing a SQL query that should be executed upon successful completion of the write operation in order to generate a message to be shown to the user. ( #2138 ) \n \n \n The automatically generated border color for a database is now shown in more places around the application. ( #2119 ) \n \n \n Every instance of example shell script code in the documentation should now include a working copy button, free from additional syntax. ( #2140 )", "breadcrumbs": "[\"Changelog\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-auth-passwords\", \"label\": \"datasette-auth-passwords\"}, {\"href\": \"https://github.com/simonw/datasette/security/advisories/GHSA-7ch3-7pp7-7cpq\", \"label\": \"the security advisory\"}, {\"href\": \"https://github.com/simonw/datasette/issues/2133\", \"label\": \"#2133\"}, {\"href\": \"https://github.com/simonw/datasette/issues/2138\", \"label\": \"#2138\"}, {\"href\": \"https://github.com/simonw/datasette/issues/2119\", \"label\": \"#2119\"}, {\"href\": \"https://github.com/simonw/datasette/issues/2140\", \"label\": \"#2140\"}]"} {"id": "plugin_hooks:plugin-hook-actor-from-request", "page": "plugin_hooks", "ref": "plugin-hook-actor-from-request", "title": "actor_from_request(datasette, request)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. 
\n \n \n \n request - Request object \n \n The current HTTP request. \n \n \n \n This is part of Datasette's authentication and permissions system . The function should attempt to authenticate an actor (either a user or an API actor of some sort) based on information in the request. \n If it cannot authenticate an actor, it should return None . Otherwise it should return a dictionary representing that actor. \n Here's an example that authenticates the actor based on an incoming API key: \n from datasette import hookimpl\nimport secrets\n\nSECRET_KEY = \"this-is-a-secret\"\n\n\n@hookimpl\ndef actor_from_request(datasette, request):\n authorization = (\n request.headers.get(\"authorization\") or \"\"\n )\n expected = \"Bearer {}\".format(SECRET_KEY)\n\n if secrets.compare_digest(authorization, expected):\n return {\"id\": \"bot\"} \n If you install this in your plugins directory you can test it like this: \n curl -H 'Authorization: Bearer this-is-a-secret' http://localhost:8003/-/actor.json \n Instead of returning a dictionary, this function can return an awaitable function which itself returns either None or a dictionary. This is useful for authentication functions that need to make a database query - for example: \n from datasette import hookimpl\n\n\n@hookimpl\ndef actor_from_request(datasette, request):\n async def inner():\n token = request.args.get(\"_token\")\n if not token:\n return None\n # Look up ?_token=xxx in sessions table\n result = await datasette.get_database().execute(\n \"select count(*) from sessions where token = ?\",\n [token],\n )\n if result.first()[0]:\n return {\"token\": token}\n else:\n return None\n\n return inner \n Examples: datasette-auth-tokens , datasette-auth-passwords", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-auth-tokens\", \"label\": \"datasette-auth-tokens\"}, {\"href\": \"https://datasette.io/plugins/datasette-auth-passwords\", \"label\": \"datasette-auth-passwords\"}]"} {"id": "configuration:configuration-cli", "page": "configuration", "ref": "configuration-cli", "title": "Configuration via the command-line", "content": "The recommended way to configure Datasette is using a datasette.yaml file passed to -c/--config . You can also pass individual settings to Datasette using the -s/--setting option, which can be used multiple times: \n datasette mydatabase.db \\\n --setting settings.default_page_size 50 \\\n --setting settings.sql_time_limit_ms 3500 \n This option takes dotted-notation for the first argument and a value for the second argument. This means you can use it to set any configuration value that would be valid in a datasette.yaml file. 
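\n As a sketch of that equivalence, the two --setting options shown above correspond to a datasette.yaml file containing: \n settings:\n  default_page_size: 50\n  sql_time_limit_ms: 3500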
\n It also works for plugin configuration, for example for datasette-cluster-map : \n datasette mydatabase.db \\\n --setting plugins.datasette-cluster-map.latitude_column xlat \\\n --setting plugins.datasette-cluster-map.longitude_column xlon \n If the value you provide is a valid JSON object or list it will be treated as nested data, allowing you to configure plugins that accept lists such as datasette-proxy-url : \n datasette mydatabase.db \\\n -s plugins.datasette-proxy-url.paths '[{\"path\": \"/proxy\", \"backend\": \"http://example.com/\"}]' \n This is equivalent to a datasette.yaml file containing the following: \n [[[cog\nfrom metadata_doc import config_example\nimport textwrap\nconfig_example(cog, textwrap.dedent(\n \"\"\"\n plugins:\n datasette-proxy-url:\n paths:\n - path: /proxy\n backend: http://example.com/\n \"\"\").strip()\n ) \n ]]] \n [[[end]]]", "breadcrumbs": "[\"Configuration\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-cluster-map\", \"label\": \"datasette-cluster-map\"}, {\"href\": \"https://datasette.io/plugins/datasette-proxy-url\", \"label\": \"datasette-proxy-url\"}]"} {"id": "cli-reference:cli-help-install-help", "page": "cli-reference", "ref": "cli-help-install-help", "title": "datasette install", "content": "Install new Datasette plugins. This command works like pip install but ensures that your plugins will be installed into the same environment as Datasette. \n This command: \n datasette install datasette-cluster-map \n Would install the datasette-cluster-map plugin. \n [[[cog\nhelp([\"install\", \"--help\"]) \n ]]] \n Usage: datasette install [OPTIONS] [PACKAGES]...\n\n Install plugins and packages from PyPI into the same environment as Datasette\n\nOptions:\n -U, --upgrade Upgrade packages to latest version\n -r, --requirement PATH Install from requirements file\n -e, --editable TEXT Install a project in editable mode from this path\n --help Show this message and exit. \n [[[end]]]", "breadcrumbs": "[\"CLI reference\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-cluster-map\", \"label\": \"datasette-cluster-map\"}]"} {"id": "plugin_hooks:plugin-hook-extra-body-script", "page": "plugin_hooks", "ref": "plugin-hook-extra-body-script", "title": "extra_body_script(template, database, table, columns, view_name, request, datasette)", "content": "Extra JavaScript to be added to a <script> element: \n @hookimpl\ndef extra_body_script():\n return {\n \"module\": True,\n \"script\": \"console.log('Your JavaScript goes here...')\",\n } \n This will add the following to the end of your page: \n <script type=\"module\">console.log('Your JavaScript goes here...')</script> \n Example: datasette-cluster-map", "breadcrumbs": "[\"Plugin hooks\", \"Page extras\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-cluster-map\", \"label\": \"datasette-cluster-map\"}]"} {"id": "plugin_hooks:plugin-hook-query-actions", "page": "plugin_hooks", "ref": "plugin-hook-query-actions", "title": "query_actions(datasette, actor, database, query_name, request, sql, params)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. \n \n \n \n actor - dictionary or None \n \n The currently authenticated actor . \n \n \n \n database - string \n \n The name of the database. \n \n \n \n query_name - string or None \n \n The name of the canned query, or None if this is an arbitrary SQL query. \n \n \n \n request - Request object \n \n The current HTTP request. 
\n \n \n \n sql - string \n \n The SQL query being executed \n \n \n \n params - dictionary \n \n The parameters passed to the SQL query, if any. \n \n \n \n Populates a \"Query actions\" menu on the canned query and arbitrary SQL query pages. \n This example adds a new query action linking to a page for explaining a query: \n from datasette import hookimpl\nimport urllib\n\n\n@hookimpl\ndef query_actions(datasette, database, query_name, sql):\n # Don't explain an explain\n if sql.lower().startswith(\"explain\"):\n return\n return [\n {\n \"href\": datasette.urls.database(database)\n + \"?\"\n + urllib.parse.urlencode(\n {\n \"sql\": \"explain \" + sql,\n }\n ),\n \"label\": \"Explain this query\",\n \"description\": \"Get a summary of how SQLite executes the query\",\n },\n ] \n Example: datasette-create-view", "breadcrumbs": "[\"Plugin hooks\", \"Action hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-create-view\", \"label\": \"datasette-create-view\"}]"} {"id": "plugin_hooks:plugin-hook-row-actions", "page": "plugin_hooks", "ref": "plugin-hook-row-actions", "title": "row_actions(datasette, actor, request, database, table, row)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. \n \n \n \n actor - dictionary or None \n \n The currently authenticated actor . \n \n \n \n request - Request object or None \n \n The current HTTP request. \n \n \n \n database - string \n \n The name of the database. \n \n \n \n table - string \n \n The name of the table. \n \n \n \n row - sqlite.Row \n \n The SQLite row object being displayed on the page. \n \n \n \n Return links for the \"Row actions\" menu shown at the top of the row page. \n This example displays the row in JSON plus some additional debug information if the user is signed in: \n from datasette import hookimpl\nimport json\n\n\n@hookimpl\ndef row_actions(datasette, database, table, actor, row):\n if actor:\n return [\n {\n \"href\": datasette.urls.instance(),\n \"label\": f\"Row details for {actor['id']}\",\n \"description\": json.dumps(\n dict(row), default=repr\n ),\n },\n ] \n Example: datasette-enrichments", "breadcrumbs": "[\"Plugin hooks\", \"Action hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-enrichments\", \"label\": \"datasette-enrichments\"}]"} {"id": "plugin_hooks:plugin-hook-track-event", "page": "plugin_hooks", "ref": "plugin-hook-track-event", "title": "track_event(datasette, event)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . \n \n \n \n event - Event \n \n Information about the event, represented as an instance of a subclass of the Event base class. \n \n \n \n This hook will be called any time an event is tracked by code that calls the datasette.track_event(...) internal method. \n The event object will always have the following properties: \n \n \n name : a string representing the name of the event, for example logout or create-table . \n \n \n actor : a dictionary representing the actor that triggered the event, or None if the event was not triggered by an actor. \n \n \n created : a datetime.datetime object in the timezone.utc timezone representing the time the event object was created. \n \n \n Other properties on the event will be available depending on the type of event. 
You can also access those as a dictionary using event.properties() . \n The events fired by Datasette core are documented here . \n This example plugin logs details of all events to standard error: \n from datasette import hookimpl\nimport json\nimport sys\n\n\n@hookimpl\ndef track_event(event):\n name = event.name\n actor = event.actor\n properties = event.properties()\n msg = json.dumps(\n {\n \"name\": name,\n \"actor\": actor,\n \"properties\": properties,\n }\n )\n print(msg, file=sys.stderr, flush=True) \n The function can also return an async function which will be awaited. This is useful for writing to a database. \n This example logs events to a datasette_events table in a database called events . It uses the startup() hook to create that table if it does not exist. \n from datasette import hookimpl\nimport json\n\n@hookimpl\ndef startup(datasette):\n async def inner():\n db = datasette.get_database(\"events\")\n await db.execute_write(\n \"\"\"\n create table if not exists datasette_events (\n id integer primary key,\n event_type text,\n created text,\n actor text,\n properties text\n )\n \"\"\"\n )\n\n return inner\n\n\n@hookimpl\ndef track_event(datasette, event):\n async def inner():\n db = datasette.get_database(\"events\")\n properties = event.properties()\n await db.execute_write(\n \"\"\"\n insert into datasette_events (event_type, created, actor, properties)\n values (?, strftime('%Y-%m-%d %H:%M:%S', 'now'), ?, ?)\n \"\"\",\n (event.name, json.dumps(event.actor), json.dumps(properties)),\n )\n\n return inner \n Example: datasette-events-db", "breadcrumbs": "[\"Plugin hooks\", \"Event tracking\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-events-db\", \"label\": \"datasette-events-db\"}]"} {"id": "plugin_hooks:plugin-hook-database-actions", "page": "plugin_hooks", "ref": "plugin-hook-database-actions", "title": "database_actions(datasette, actor, database, request)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. \n \n \n \n actor - dictionary or None \n \n The currently authenticated actor . \n \n \n \n database - string \n \n The name of the database. \n \n \n \n request - Request object \n \n The current HTTP request. \n \n \n \n Populates an actions menu on the database page. 
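\n Each action in the returned list is a {\"href\": \"...\", \"label\": \"...\"} dictionary, optionally with a \"description\" key - the same shape used by the other action hooks - and the hook can return either that list or an awaitable function that produces it. A minimal sketch using a hypothetical URL: \n return [{\"href\": \"/-/my-plugin\", \"label\": \"My database action\"}]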
\n This example adds a new database action for creating a table, if the user has the edit-schema permission: \n from datasette import hookimpl\n\n\n@hookimpl\ndef database_actions(datasette, actor, database):\n async def inner():\n if not await datasette.permission_allowed(\n actor,\n \"edit-schema\",\n resource=database,\n default=False,\n ):\n return []\n return [\n {\n \"href\": datasette.urls.path(\n \"/-/edit-schema/{}/-/create\".format(\n database\n )\n ),\n \"label\": \"Create a table\",\n }\n ]\n\n return inner \n Example: datasette-graphql , datasette-edit-schema", "breadcrumbs": "[\"Plugin hooks\", \"Action hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-graphql\", \"label\": \"datasette-graphql\"}, {\"href\": \"https://datasette.io/plugins/datasette-edit-schema\", \"label\": \"datasette-edit-schema\"}]"} {"id": "plugin_hooks:plugin-hook-table-actions", "page": "plugin_hooks", "ref": "plugin-hook-table-actions", "title": "table_actions(datasette, actor, database, table, request)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. \n \n \n \n actor - dictionary or None \n \n The currently authenticated actor . \n \n \n \n database - string \n \n The name of the database. \n \n \n \n table - string \n \n The name of the table. \n \n \n \n request - Request object or None \n \n The current HTTP request. This can be None if the request object is not available. \n \n \n \n This example adds a new table action if the signed in user is \"root\" : \n from datasette import hookimpl\n\n\n@hookimpl\ndef table_actions(datasette, actor, database, table):\n if actor and actor.get(\"id\") == \"root\":\n return [\n {\n \"href\": datasette.urls.path(\n \"/-/edit-schema/{}/{}\".format(\n database, table\n )\n ),\n \"label\": \"Edit schema for this table\",\n \"description\": \"Add, remove, rename or alter columns for this table.\",\n }\n ] \n Example: datasette-graphql", "breadcrumbs": "[\"Plugin hooks\", \"Action hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-graphql\", \"label\": \"datasette-graphql\"}]"} {"id": "changelog:id17", "page": "changelog", "ref": "id17", "title": "0.61.1 (2022-03-23)", "content": "Fixed a bug where databases with a different route from their name (as used by the datasette-hashed-urls plugin ) returned errors when executing custom SQL queries. ( #1682 )", "breadcrumbs": "[\"Changelog\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-hashed-urls\", \"label\": \"datasette-hashed-urls plugin\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1682\", \"label\": \"#1682\"}]"} {"id": "performance:performance-hashed-urls", "page": "performance", "ref": "performance-hashed-urls", "title": "datasette-hashed-urls", "content": "If you open a database file in immutable mode using the -i option, you can be assured that the content of that database will not change for the lifetime of the Datasette server. \n The datasette-hashed-urls plugin implements an optimization where your database is served with part of the SHA-256 hash of the database contents baked into the URL. \n A database at /fixtures will instead be served at /fixtures-aa7318b , and a year-long cache expiry header will be returned with those pages. \n This will then be cached by both browsers and caching proxies such as Cloudflare or Fastly, providing a potentially significant performance boost. 
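\n This optimization applies to databases opened in immutable mode - as a sketch (fixtures.db here is a placeholder filename): \n datasette -i fixtures.db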
\n To install the plugin, run the following: \n datasette install datasette-hashed-urls \n \n Prior to Datasette 0.61 hashed URL mode was a core Datasette feature, enabled using the hash_urls setting. This implementation has now been removed in favor of the datasette-hashed-urls plugin. \n Prior to Datasette 0.28 hashed URL mode was the default behaviour for Datasette, since all database files were assumed to be immutable and unchanging. From 0.28 onwards the default has been to treat database files as mutable unless explicitly configured otherwise.", "breadcrumbs": "[\"Performance and caching\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-hashed-urls\", \"label\": \"datasette-hashed-urls plugin\"}]"} {"id": "plugin_hooks:plugin-hook-filters-from-request", "page": "plugin_hooks", "ref": "plugin-hook-filters-from-request", "title": "filters_from_request(request, database, table, datasette)", "content": "request - Request object \n \n The current HTTP request. \n \n \n \n database - string \n \n The name of the database. \n \n \n \n table - string \n \n The name of the table. \n \n \n \n datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. \n \n \n \n This hook runs on the table page, and can influence the where clause of the SQL query used to populate that page, based on query string arguments on the incoming request. \n The hook should return an instance of datasette.filters.FilterArguments which has one required and three optional arguments: \n return FilterArguments(\n where_clauses=[\"id > :max_id\"],\n params={\"max_id\": 5},\n human_descriptions=[\"max_id is greater than 5\"],\n extra_context={},\n) \n The arguments to the FilterArguments class constructor are as follows: \n \n \n where_clauses - list of strings, required \n \n A list of SQL fragments that will be inserted into the SQL query, joined by the and operator. These can include :named parameters which will be populated using data in params . \n \n \n \n params - dictionary, optional \n \n Additional keyword arguments to be used when the query is executed. These should match any :arguments in the where clauses. \n \n \n \n human_descriptions - list of strings, optional \n \n These strings will be included in the human-readable description at the top of the page and the page <title> . \n \n \n \n extra_context - dictionary, optional \n \n Additional context variables that should be made available to the table.html template when it is rendered. \n \n \n \n This example plugin causes 0 results to be returned if ?_nothing=1 is added to the URL: \n from datasette import hookimpl\nfrom datasette.filters import FilterArguments\n\n\n@hookimpl\ndef filters_from_request(self, request):\n if request.args.get(\"_nothing\"):\n return FilterArguments(\n [\"1 = 0\"], human_descriptions=[\"NOTHING\"]\n ) \n Example: datasette-leaflet-freedraw", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-leaflet-freedraw\", \"label\": \"datasette-leaflet-freedraw\"}]"} {"id": "plugin_hooks:plugin-hook-permission-allowed", "page": "plugin_hooks", "ref": "plugin-hook-permission-allowed", "title": "permission_allowed(datasette, actor, action, resource)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. 
\n \n \n \n actor - dictionary \n \n The current actor, as decided by actor_from_request(datasette, request) . \n \n \n \n action - string \n \n The action to be performed, e.g. \"edit-table\" . \n \n \n \n resource - string or None \n \n An identifier for the individual resource, e.g. the name of the table. \n \n \n \n Called to check that an actor has permission to perform an action on a resource. Can return True if the action is allowed, False if the action is not allowed or None if the plugin does not have an opinion one way or the other. \n Here's an example plugin which randomly selects if a permission should be allowed or denied, except for view-instance which always uses the default permission scheme instead. \n from datasette import hookimpl\nimport random\n\n\n@hookimpl\ndef permission_allowed(action):\n if action != \"view-instance\":\n # Return True or False at random\n return random.random() > 0.5\n # Returning None falls back to default permissions \n This function can alternatively return an awaitable function which itself returns True , False or None . You can use this option if you need to execute additional database queries using await datasette.execute(...) . \n Here's an example that allows users to view the admin_log table only if their actor id is present in the admin_users table. It also disallows arbitrary SQL queries for the staff.db database for all users. \n @hookimpl\ndef permission_allowed(datasette, actor, action, resource):\n async def inner():\n if action == \"execute-sql\" and resource == \"staff\":\n return False\n if action == \"view-table\" and resource == (\n \"staff\",\n \"admin_log\",\n ):\n if not actor:\n return False\n user_id = actor[\"id\"]\n return await datasette.get_database(\n \"staff\"\n ).execute(\n \"select count(*) from admin_users where user_id = :user_id\",\n {\"user_id\": user_id},\n )\n\n return inner \n See built-in permissions for a full list of permissions that are included in Datasette core. \n Example: datasette-permissions-sql", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-permissions-sql\", \"label\": \"datasette-permissions-sql\"}]"} {"id": "plugin_hooks:plugin-hook-render-cell", "page": "plugin_hooks", "ref": "plugin-hook-render-cell", "title": "render_cell(row, value, column, table, database, datasette, request)", "content": "Lets you customize the display of values within table cells in the HTML table view. \n \n \n row - sqlite.Row \n \n The SQLite row object that the value being rendered is part of \n \n \n \n value - string, integer, float, bytes or None \n \n The value that was loaded from the database \n \n \n \n column - string \n \n The name of the column being rendered \n \n \n \n table - string or None \n \n The name of the table - or None if this is a custom SQL query \n \n \n \n database - string \n \n The name of the database \n \n \n \n datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. \n \n \n \n request - Request object \n \n The current request object \n \n \n \n If your hook returns None , it will be ignored. Use this to indicate that your hook is not able to custom render this particular value. \n If the hook returns a string, that string will be rendered in the table cell. \n If you want to return HTML markup you can do so by returning a jinja2.Markup object. \n You can also return an awaitable function which returns a value. 
\n Datasette will loop through all available render_cell hooks and display the value returned by the first one that does not return None . \n Here is an example of a custom render_cell() plugin which looks for values that are a JSON string matching the following format: \n {\"href\": \"https://www.example.com/\", \"label\": \"Name\"} \n If the value matches that pattern, the plugin returns an HTML link element: \n from datasette import hookimpl\nimport markupsafe\nimport json\n\n\n@hookimpl\ndef render_cell(value):\n # Render {\"href\": \"...\", \"label\": \"...\"} as link\n if not isinstance(value, str):\n return None\n stripped = value.strip()\n if not (\n stripped.startswith(\"{\") and stripped.endswith(\"}\")\n ):\n return None\n try:\n data = json.loads(value)\n except ValueError:\n return None\n if not isinstance(data, dict):\n return None\n if set(data.keys()) != {\"href\", \"label\"}:\n return None\n href = data[\"href\"]\n if not (\n href.startswith(\"/\")\n or href.startswith(\"http://\")\n or href.startswith(\"https://\")\n ):\n return None\n return markupsafe.Markup(\n '<a href=\"{href}\">{label}</a>'.format(\n href=markupsafe.escape(data[\"href\"]),\n label=markupsafe.escape(data[\"label\"] or \"\")\n or \" \",\n )\n ) \n Examples: datasette-render-binary , datasette-render-markdown , datasette-json-html", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-render-binary\", \"label\": \"datasette-render-binary\"}, {\"href\": \"https://datasette.io/plugins/datasette-render-markdown\", \"label\": \"datasette-render-markdown\"}, {\"href\": \"https://datasette.io/plugins/datasette-json-html\", \"label\": \"datasette-json-html\"}]"} {"id": "plugin_hooks:plugin-hook-startup", "page": "plugin_hooks", "ref": "plugin-hook-startup", "title": "startup(datasette)", "content": "This hook fires when the Datasette application server first starts up. \n Here is an example that validates required plugin configuration. The server will fail to start and show an error if the validation check fails: \n @hookimpl\ndef startup(datasette):\n config = datasette.plugin_config(\"my-plugin\") or {}\n assert (\n \"required-setting\" in config\n ), \"my-plugin requires setting required-setting\" \n You can also return an async function, which will be awaited on startup. Use this option if you need to execute any database queries, for example this function which creates the my_table database table if it does not yet exist: \n @hookimpl\ndef startup(datasette):\n async def inner():\n db = datasette.get_database()\n if \"my_table\" not in await db.table_names():\n await db.execute_write(\n \"\"\"\n create table my_table (mycol text)\n \"\"\"\n )\n\n return inner \n Potential use-cases: \n \n \n Run some initialization code for the plugin \n \n \n Create database tables that a plugin needs on startup \n \n \n Validate the configuration for a plugin on startup, and raise an error if it is invalid \n \n \n \n If you are writing unit tests for a plugin that uses this hook and doesn't exercise Datasette by sending\n any simulated requests through it you will need to explicitly call await ds.invoke_startup() in your tests. 
An example: \n @pytest.mark.asyncio\nasync def test_my_plugin():\n ds = Datasette()\n await ds.invoke_startup()\n # Rest of test goes here \n \n Examples: datasette-saved-queries , datasette-init", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-saved-queries\", \"label\": \"datasette-saved-queries\"}, {\"href\": \"https://datasette.io/plugins/datasette-init\", \"label\": \"datasette-init\"}]"} {"id": "plugin_hooks:plugin-hook-canned-queries", "page": "plugin_hooks", "ref": "plugin-hook-canned-queries", "title": "canned_queries(datasette, database, actor)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. \n \n \n \n database - string \n \n The name of the database. \n \n \n \n actor - dictionary or None \n \n The currently authenticated actor . \n \n \n \n Use this hook to return a dictionary of additional canned query definitions for the specified database. The return value should be the same shape as the JSON described in the canned query documentation. \n from datasette import hookimpl\n\n\n@hookimpl\ndef canned_queries(datasette, database):\n if database == \"mydb\":\n return {\n \"my_query\": {\n \"sql\": \"select * from my_table where id > :min_id\"\n }\n } \n The hook can alternatively return an awaitable function that returns a list. Here's an example that returns queries that have been stored in the saved_queries database table, if one exists: \n from datasette import hookimpl\n\n\n@hookimpl\ndef canned_queries(datasette, database):\n async def inner():\n db = datasette.get_database(database)\n if await db.table_exists(\"saved_queries\"):\n results = await db.execute(\n \"select name, sql from saved_queries\"\n )\n return {\n result[\"name\"]: {\"sql\": result[\"sql\"]}\n for result in results\n }\n\n return inner \n The actor parameter can be used to include the currently authenticated actor in your decision. Here's an example that returns saved queries that were saved by that actor: \n from datasette import hookimpl\n\n\n@hookimpl\ndef canned_queries(datasette, database, actor):\n async def inner():\n db = datasette.get_database(database)\n if actor is not None and await db.table_exists(\n \"saved_queries\"\n ):\n results = await db.execute(\n \"select name, sql from saved_queries where actor_id = :id\",\n {\"id\": actor[\"id\"]},\n )\n return {\n result[\"name\"]: {\"sql\": result[\"sql\"]}\n for result in results\n }\n\n return inner \n Example: datasette-saved-queries", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-saved-queries\", \"label\": \"datasette-saved-queries\"}]"} {"id": "plugin_hooks:plugin-hook-menu-links", "page": "plugin_hooks", "ref": "plugin-hook-menu-links", "title": "menu_links(datasette, actor, request)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. \n \n \n \n actor - dictionary or None \n \n The currently authenticated actor . \n \n \n \n request - Request object or None \n \n The current HTTP request. This can be None if the request object is not available. \n \n \n \n This hook allows additional items to be included in the menu displayed by Datasette's top right menu icon. \n The hook should return a list of {\"href\": \"...\", \"label\": \"...\"} menu items. 
These will be added to the menu. \n It can alternatively return an async def awaitable function which returns a list of menu items. \n This example adds a new menu item but only if the signed in user is \"root\" : \n from datasette import hookimpl\n\n\n@hookimpl\ndef menu_links(datasette, actor):\n if actor and actor.get(\"id\") == \"root\":\n return [\n {\n \"href\": datasette.urls.path(\n \"/-/edit-schema\"\n ),\n \"label\": \"Edit schema\",\n },\n ] \n Using datasette.urls here ensures that links in the menu will take the base_url setting into account. \n Examples: datasette-search-all , datasette-graphql", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-search-all\", \"label\": \"datasette-search-all\"}, {\"href\": \"https://datasette.io/plugins/datasette-graphql\", \"label\": \"datasette-graphql\"}]"} {"id": "getting_started:getting-started-tutorial", "page": "getting_started", "ref": "getting-started-tutorial", "title": "Follow a tutorial", "content": "Datasette has several tutorials to help you get started with the tool. Try one of the following: \n \n \n Exploring a database with Datasette shows how to use the Datasette web interface to explore a new database. \n \n \n Learn SQL with Datasette introduces SQL, and shows how to use that query language to ask questions of your data. \n \n \n Cleaning data with sqlite-utils and Datasette guides you through using sqlite-utils to turn a CSV file into a database that you can explore using Datasette.", "breadcrumbs": "[\"Getting started\"]", "references": "[{\"href\": \"https://datasette.io/tutorials\", \"label\": \"tutorials\"}, {\"href\": \"https://datasette.io/tutorials/explore\", \"label\": \"Exploring a database with Datasette\"}, {\"href\": \"https://datasette.io/tutorials/learn-sql\", \"label\": \"Learn SQL with Datasette\"}, {\"href\": \"https://datasette.io/tutorials/clean-data\", \"label\": \"Cleaning data with sqlite-utils and Datasette\"}, {\"href\": \"https://sqlite-utils.datasette.io/\", \"label\": \"sqlite-utils\"}]"} {"id": "changelog:id12", "page": "changelog", "ref": "id12", "title": "Documentation", "content": "New tutorial: Cleaning data with sqlite-utils and Datasette . \n \n \n Screenshots in the documentation are now maintained using shot-scraper , as described in Automating screenshots for the Datasette documentation using shot-scraper . ( #1844 ) \n \n \n More detailed command descriptions on the CLI reference page. ( #1787 ) \n \n \n New documentation on Running Datasette using OpenRC - thanks, Adam Simpson. ( #1825 )", "breadcrumbs": "[\"Changelog\", \"0.63 (2022-10-27)\"]", "references": "[{\"href\": \"https://datasette.io/tutorials/clean-data\", \"label\": \"Cleaning data with sqlite-utils and Datasette\"}, {\"href\": \"https://shot-scraper.datasette.io/\", \"label\": \"shot-scraper\"}, {\"href\": \"https://simonwillison.net/2022/Oct/14/automating-screenshots/\", \"label\": \"Automating screenshots for the Datasette documentation using shot-scraper\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1844\", \"label\": \"#1844\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1787\", \"label\": \"#1787\"}, {\"href\": \"https://github.com/simonw/datasette/pull/1825\", \"label\": \"#1825\"}]"} {"id": "internals:internals-tilde-encoding", "page": "internals", "ref": "internals-tilde-encoding", "title": "Tilde encoding", "content": "Datasette uses a custom encoding scheme in some places, called tilde encoding . 
This is primarily used for table names and row primary keys, to avoid any confusion between / characters in those values and the Datasette URLs that reference them. \n Tilde encoding uses the same algorithm as URL percent-encoding , but with the ~ tilde character used in place of % . \n Any character other than ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz0123456789_- will be replaced by the numeric equivalent preceded by a tilde. For example: \n \n \n / becomes ~2F \n \n \n . becomes ~2E \n \n \n % becomes ~25 \n \n \n ~ becomes ~7E \n \n \n Space becomes + \n \n \n polls/2022.primary becomes polls~2F2022~2Eprimary \n \n \n Note that the space character is a special case: it will be replaced with a + symbol. \n \n \n \n datasette.utils.tilde_encode(s: str) -> str \n \n Returns tilde-encoded string - for example /foo/bar -> ~2Ffoo~2Fbar \n \n \n \n \n \n datasette.utils.tilde_decode(s: str) -> str \n \n Decodes a tilde-encoded string, so ~2Ffoo~2Fbar -> /foo/bar", "breadcrumbs": "[\"Internals for plugins\", \"The datasette.utils module\"]", "references": "[{\"href\": \"https://developer.mozilla.org/en-US/docs/Glossary/percent-encoding\", \"label\": \"URL percent-encoding\"}]"} {"id": "plugin_hooks:plugin-hook-extra-js-urls", "page": "plugin_hooks", "ref": "plugin-hook-extra-js-urls", "title": "extra_js_urls(template, database, table, columns, view_name, request, datasette)", "content": "This takes the same arguments as extra_template_vars(...) \n This works in the same way as extra_css_urls() but for JavaScript. You can\n return a list of URLs, a list of dictionaries or an awaitable function that returns those things: \n from datasette import hookimpl\n\n\n@hookimpl\ndef extra_js_urls():\n return [\n {\n \"url\": \"https://code.jquery.com/jquery-3.3.1.slim.min.js\",\n \"sri\": \"sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo\",\n }\n ] \n You can also return URLs to files from your plugin's static/ directory, if\n you have one: \n @hookimpl\ndef extra_js_urls():\n return [\"/-/static-plugins/your-plugin/app.js\"] \n Note that your-plugin here should be the hyphenated plugin name - the name that is displayed in the list on the /-/plugins debug page. \n If your code uses JavaScript modules you should include the \"module\": True key. See Custom CSS and JavaScript for more details. \n @hookimpl\ndef extra_js_urls():\n return [\n {\n \"url\": \"/-/static-plugins/your-plugin/app.js\",\n \"module\": True,\n }\n ] \n Examples: datasette-cluster-map , datasette-vega", "breadcrumbs": "[\"Plugin hooks\", \"Page extras\"]", "references": "[{\"href\": \"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules\", \"label\": \"JavaScript modules\"}, {\"href\": \"https://datasette.io/plugins/datasette-cluster-map\", \"label\": \"datasette-cluster-map\"}, {\"href\": \"https://datasette.io/plugins/datasette-vega\", \"label\": \"datasette-vega\"}]"} {"id": "changelog:javascript-modules", "page": "changelog", "ref": "javascript-modules", "title": "JavaScript modules", "content": "JavaScript modules were introduced in ECMAScript 2015 and provide native browser support for the import and export keywords. \n To use modules, JavaScript needs to be included in <script> tags with a type=\"module\" attribute. \n Datasette now has the ability to output <script type=\"module\"> in places where you may wish to take advantage of modules. 
The extra_js_urls option described in Custom CSS and JavaScript can now be used with modules, and module support is also available for the extra_body_script() plugin hook. ( #1186 , #1187 ) \n datasette-leaflet-freedraw is the first example of a Datasette plugin that takes advantage of the new support for JavaScript modules. See Drawing shapes on a map to query a SpatiaLite database for more on this plugin.", "breadcrumbs": "[\"Changelog\", \"0.54 (2021-01-25)\"]", "references": "[{\"href\": \"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules\", \"label\": \"JavaScript modules\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1186\", \"label\": \"#1186\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1187\", \"label\": \"#1187\"}, {\"href\": \"https://datasette.io/plugins/datasette-leaflet-freedraw\", \"label\": \"datasette-leaflet-freedraw\"}, {\"href\": \"https://simonwillison.net/2021/Jan/24/drawing-shapes-spatialite/\", \"label\": \"Drawing shapes on a map to query a SpatiaLite database\"}]"} {"id": "changelog:id52", "page": "changelog", "ref": "id52", "title": "0.48 (2020-08-16)", "content": "Datasette documentation now lives at docs.datasette.io . \n \n \n db.is_mutable property is now documented and tested, see Database introspection . \n \n \n The extra_template_vars , extra_css_urls , extra_js_urls and extra_body_script plugin hooks now all accept the same arguments. See extra_template_vars(template, database, table, columns, view_name, request, datasette) for details. ( #939 ) \n \n \n Those hooks now accept a new columns argument detailing the table columns that will be rendered on that page. ( #938 ) \n \n \n Fixed bug where plugins calling db.execute_write_fn() could hang Datasette if the connection failed. ( #935 ) \n \n \n Fixed bug with the ?_nl=on output option and binary data. ( #914 )", "breadcrumbs": "[\"Changelog\"]", "references": "[{\"href\": \"https://docs.datasette.io/\", \"label\": \"docs.datasette.io\"}, {\"href\": \"https://github.com/simonw/datasette/issues/939\", \"label\": \"#939\"}, {\"href\": \"https://github.com/simonw/datasette/issues/938\", \"label\": \"#938\"}, {\"href\": \"https://github.com/simonw/datasette/issues/935\", \"label\": \"#935\"}, {\"href\": \"https://github.com/simonw/datasette/issues/914\", \"label\": \"#914\"}]"} {"id": "changelog:id174", "page": "changelog", "ref": "id174", "title": "0.14 (2017-12-09)", "content": "The theme of this release is customization: Datasette now allows every aspect\n of its presentation to be customized \n either using additional CSS or by providing entirely new templates. \n Datasette's metadata.json format \n has also been expanded, to allow per-database and per-table metadata. A new\n datasette skeleton command can be used to generate a skeleton JSON file\n ready to be filled in with per-database and per-table details. \n The metadata.json file can also be used to define\n canned queries ,\n as a more powerful alternative to SQL views. \n \n \n extra_css_urls / extra_js_urls in metadata \n A mechanism in the metadata.json format for adding custom CSS and JS urls. \n Create a metadata.json file that looks like this: \n {\n \"extra_css_urls\": [\n \"https://simonwillison.net/static/css/all.bf8cd891642c.css\"\n ],\n \"extra_js_urls\": [\n \"https://code.jquery.com/jquery-3.2.1.slim.min.js\"\n ]\n} \n Then start datasette like this: \n datasette mydb.db --metadata=metadata.json \n The CSS and JavaScript files will be linked in the <head> of every page. 
\n You can also specify a SRI (subresource integrity hash) for these assets: \n {\n \"extra_css_urls\": [\n {\n \"url\": \"https://simonwillison.net/static/css/all.bf8cd891642c.css\",\n \"sri\": \"sha384-9qIZekWUyjCyDIf2YK1FRoKiPJq4PHt6tp/ulnuuyRBvazd0hG7pWbE99zvwSznI\"\n }\n ],\n \"extra_js_urls\": [\n {\n \"url\": \"https://code.jquery.com/jquery-3.2.1.slim.min.js\",\n \"sri\": \"sha256-k2WSCIexGzOj3Euiig+TlR8gA0EmPjuc79OEeY5L45g=\"\n }\n ]\n} \n Modern browsers will only execute the stylesheet or JavaScript if the SRI hash\n matches the content served. You can generate hashes using https://www.srihash.org/ \n \n \n Auto-link column values that look like URLs ( #153 ) \n \n \n CSS styling hooks as classes on the body ( #153 ) \n Every template now gets CSS classes in the body designed to support custom\n styling. \n The index template (the top level page at / ) gets this: \n <body class=\"index\"> \n The database template ( /dbname/ ) gets this: \n <body class=\"db db-dbname\"> \n The table template ( /dbname/tablename ) gets: \n <body class=\"table db-dbname table-tablename\"> \n The row template ( /dbname/tablename/rowid ) gets: \n <body class=\"row db-dbname table-tablename\"> \n The db-x and table-x classes use the database or table names themselves IF\n they are valid CSS identifiers. If they aren't, we strip any invalid\n characters out and append a 6 character md5 digest of the original name, in\n order to ensure that multiple tables which resolve to the same stripped\n character version still have different CSS classes. \n Some examples (extracted from the unit tests): \n \"simple\" => \"simple\"\n\"MixedCase\" => \"MixedCase\"\n\"-no-leading-hyphens\" => \"no-leading-hyphens-65bea6\"\n\"_no-leading-underscores\" => \"no-leading-underscores-b921bc\"\n\"no spaces\" => \"no-spaces-7088d7\"\n\"-\" => \"336d5e\"\n\"no $ characters\" => \"no--characters-59e024\" \n \n \n datasette --template-dir=mytemplates/ argument \n You can now pass an additional argument specifying a directory to look for\n custom templates in. \n Datasette will fall back on the default templates if a template is not\n found in that directory. \n \n \n Ability to over-ride templates for individual tables/databases. \n It is now possible to over-ride templates on a per-database / per-row or per-\n table basis. \n When you access e.g. /mydatabase/mytable Datasette will look for the following: \n - table-mydatabase-mytable.html\n- table.html \n If you provided a --template-dir argument to datasette serve it will look in\n that directory first. \n The lookup rules are as follows: \n Index page (/):\n index.html\n\nDatabase page (/mydatabase):\n database-mydatabase.html\n database.html\n\nTable page (/mydatabase/mytable):\n table-mydatabase-mytable.html\n table.html\n\nRow page (/mydatabase/mytable/id):\n row-mydatabase-mytable.html\n row.html \n If a table name has spaces or other unexpected characters in it, the template\n filename will follow the same rules as our custom <body> CSS classes\n - for example, a table called \"Food Trucks\"\n will attempt to load the following templates: \n table-mydatabase-Food-Trucks-399138.html\ntable.html \n It is possible to extend the default templates using Jinja template\n inheritance. 
If you want to customize EVERY row template with some additional\n content you can do so by creating a row.html template like this: \n {% extends \"default:row.html\" %}\n\n{% block content %}\n<h1>EXTRA HTML AT THE TOP OF THE CONTENT BLOCK</h1>\n<p>This line renders the original block:</p>\n{{ super() }}\n{% endblock %} \n \n \n --static option for datasette serve ( #160 ) \n You can now tell Datasette to serve static files from a specific location at a\n specific mountpoint. \n For example: \n datasette serve mydb.db --static extra-css:/tmp/static/css \n Now if you visit this URL: \n http://localhost:8001/extra-css/blah.css \n The following file will be served: \n /tmp/static/css/blah.css \n \n \n Canned query support. \n Named canned queries can now be defined in metadata.json like this: \n {\n \"databases\": {\n \"timezones\": {\n \"queries\": {\n \"timezone_for_point\": \"select tzid from timezones ...\"\n }\n }\n }\n} \n These will be shown in a new \"Queries\" section beneath \"Views\" on the database page. \n \n \n New datasette skeleton command for generating metadata.json ( #164 ) \n \n \n metadata.json support for per-table/per-database metadata ( #165 ) \n Also added support for descriptions and HTML descriptions. \n Here's an example metadata.json file illustrating custom per-database and per-\n table metadata: \n {\n \"title\": \"Overall datasette title\",\n \"description_html\": \"This is a <em>description with HTML</em>.\",\n \"databases\": {\n \"db1\": {\n \"title\": \"First database\",\n \"description\": \"This is a string description & has no HTML\",\n \"license_url\": \"http://example.com/\",\n \"license\": \"The example license\",\n \"queries\": {\n \"canned_query\": \"select * from table1 limit 3;\"\n },\n \"tables\": {\n \"table1\": {\n \"title\": \"Custom title for table1\",\n \"description\": \"Tables can have descriptions too\",\n \"source\": \"This has a custom source\",\n \"source_url\": \"http://example.com/\"\n }\n }\n }\n }\n} \n \n \n Renamed datasette build command to datasette inspect ( #130 ) \n \n \n Upgrade to Sanic 0.7.0 ( #168 ) \n https://github.com/channelcat/sanic/releases/tag/0.7.0 \n \n \n Package and publish commands now accept --static and --template-dir \n Example usage: \n datasette package --static css:extra-css/ --static js:extra-js/ \\\n sf-trees.db --template-dir templates/ --tag sf-trees --branch master \n This creates a local Docker image that includes copies of the templates/,\n extra-css/ and extra-js/ directories. 
You can then run it like this: \n docker run -p 8001:8001 sf-trees \n For publishing to Zeit now: \n datasette publish now --static css:extra-css/ --static js:extra-js/ \\\n sf-trees.db --template-dir templates/ --name sf-trees --branch master \n \n \n HTML comment showing which templates were considered for a page ( #171 )", "breadcrumbs": "[\"Changelog\"]", "references": "[{\"href\": \"https://docs.datasette.io/en/stable/custom_templates.html\", \"label\": \"to be customized\"}, {\"href\": \"https://docs.datasette.io/en/stable/metadata.html\", \"label\": \"metadata.json format\"}, {\"href\": \"https://docs.datasette.io/en/stable/sql_queries.html#canned-queries\", \"label\": \"canned queries\"}, {\"href\": \"https://www.srihash.org/\", \"label\": \"https://www.srihash.org/\"}, {\"href\": \"https://github.com/simonw/datasette/issues/153\", \"label\": \"#153\"}, {\"href\": \"https://github.com/simonw/datasette/issues/153\", \"label\": \"#153\"}, {\"href\": \"https://github.com/simonw/datasette/issues/160\", \"label\": \"#160\"}, {\"href\": \"https://github.com/simonw/datasette/issues/164\", \"label\": \"#164\"}, {\"href\": \"https://github.com/simonw/datasette/issues/165\", \"label\": \"#165\"}, {\"href\": \"https://github.com/simonw/datasette/issues/130\", \"label\": \"#130\"}, {\"href\": \"https://github.com/simonw/datasette/issues/168\", \"label\": \"#168\"}, {\"href\": \"https://github.com/channelcat/sanic/releases/tag/0.7.0\", \"label\": \"https://github.com/channelcat/sanic/releases/tag/0.7.0\"}, {\"href\": \"https://github.com/simonw/datasette/issues/171\", \"label\": \"#171\"}]"} {"id": "changelog:id152", "page": "changelog", "ref": "id152", "title": "0.18 (2018-04-14)", "content": "This release introduces support for units ,\n contributed by Russ Garrett ( #203 ).\n You can now optionally specify the units for specific columns using metadata.json .\n Once specified, units will be displayed in the HTML view of your table. They also become\n available for use in filters - if a column is configured with a unit of distance, you can\n request all rows where that column is less than 50 meters or more than 20 feet for example. \n \n \n Link foreign keys which don't have labels. [Russ Garrett] \n This renders unlabeled FKs as simple links. \n Also includes bonus fixes for two minor issues: \n \n \n In foreign key link hrefs the primary key was escaped using HTML\n escaping rather than URL escaping. This broke some non-integer PKs. \n \n \n Print tracebacks to console when handling 500 errors. \n \n \n \n \n Fix SQLite error when loading rows with no incoming FKs. [Russ\n Garrett] \n This fixes an error caused by an invalid query when loading incoming FKs. \n The error was ignored due to async but it still got printed to the\n console. \n \n \n Allow custom units to be registered with Pint. [Russ Garrett] \n \n \n Support units in filters. [Russ Garrett] \n \n \n Tidy up units support. [Russ Garrett] \n \n \n Add units to exported JSON \n \n \n Units key in metadata skeleton \n \n \n Docs \n \n \n \n \n Initial units support. 
[Russ Garrett] \n Add support for specifying units for a column in metadata.json and\n rendering them on display using\n pint", "breadcrumbs": "[\"Changelog\"]", "references": "[{\"href\": \"https://docs.datasette.io/en/stable/metadata.html#specifying-units-for-a-column\", \"label\": \"support for units\"}, {\"href\": \"https://github.com/simonw/datasette/issues/203\", \"label\": \"#203\"}, {\"href\": \"https://pint.readthedocs.io/en/latest/\", \"label\": \"pint\"}]"} {"id": "changelog:id145", "page": "changelog", "ref": "id145", "title": "0.19 (2018-04-16)", "content": "This is the first preview of the new Datasette plugins mechanism. Only two\n plugin hooks are available so far - for custom SQL functions and custom template\n filters. There's plenty more to come - read the documentation and get involved in\n the tracking ticket if you\n have feedback on the direction so far. \n \n \n Fix for _sort_desc=sortable_with_nulls test, refs #216 \n \n \n Fixed #216 - paginate correctly when sorting by nullable column \n \n \n Initial documentation for plugins, closes #213 \n https://docs.datasette.io/en/stable/plugins.html \n \n \n New --plugins-dir=plugins/ option ( #212 ) \n New option causing Datasette to load and evaluate all of the Python files in\n the specified directory and register any plugins that are defined in those\n files. \n This new option is available for the following commands: \n datasette serve mydb.db --plugins-dir=plugins/\ndatasette publish now/heroku mydb.db --plugins-dir=plugins/\ndatasette package mydb.db --plugins-dir=plugins/ \n \n \n Start of the plugin system, based on pluggy ( #210 ) \n Uses https://pluggy.readthedocs.io/ originally created for the py.test project \n We're starting with two plugin hooks: \n prepare_connection(conn) \n This is called when a new SQLite connection is created. It can be used to register custom SQL functions. \n prepare_jinja2_environment(env) \n This is called with the Jinja2 environment. It can be used to register custom template tags and filters. \n An example plugin which uses these two hooks can be found at https://github.com/simonw/datasette-plugin-demos or installed using pip install datasette-plugin-demos \n Refs #14 \n \n \n Return HTTP 405 on InvalidUsage rather than 500. [Russ Garrett] \n This also stops it filling up the logs. 
This happens for HEAD requests\n at the moment - which perhaps should be handled better, but that's a\n different issue.", "breadcrumbs": "[\"Changelog\"]", "references": "[{\"href\": \"https://docs.datasette.io/en/stable/plugins.html\", \"label\": \"the documentation\"}, {\"href\": \"https://github.com/simonw/datasette/issues/14\", \"label\": \"the tracking ticket\"}, {\"href\": \"https://github.com/simonw/datasette/issues/216\", \"label\": \"#216\"}, {\"href\": \"https://github.com/simonw/datasette/issues/216\", \"label\": \"#216\"}, {\"href\": \"https://github.com/simonw/datasette/issues/213\", \"label\": \"#213\"}, {\"href\": \"https://docs.datasette.io/en/stable/plugins.html\", \"label\": \"https://docs.datasette.io/en/stable/plugins.html\"}, {\"href\": \"https://github.com/simonw/datasette/issues/212\", \"label\": \"#212\"}, {\"href\": \"https://github.com/simonw/datasette/issues/14\", \"label\": \"#210\"}, {\"href\": \"https://pluggy.readthedocs.io/\", \"label\": \"https://pluggy.readthedocs.io/\"}, {\"href\": \"https://github.com/simonw/datasette-plugin-demos\", \"label\": \"https://github.com/simonw/datasette-plugin-demos\"}, {\"href\": \"https://github.com/simonw/datasette/issues/14\", \"label\": \"#14\"}]"} {"id": "contributing:contributing-running-tests", "page": "contributing", "ref": "contributing-running-tests", "title": "Running the tests", "content": "Once you have done this, you can run the Datasette unit tests from inside your datasette/ directory using pytest like so: \n pytest \n You can run the tests faster using multiple CPU cores with pytest-xdist like this: \n pytest -n auto -m \"not serial\" \n -n auto detects the number of available cores automatically. The -m \"not serial\" skips tests that don't work well in a parallel test environment. You can run those tests separately like so: \n pytest -m \"serial\"", "breadcrumbs": "[\"Contributing\"]", "references": "[{\"href\": \"https://docs.pytest.org/\", \"label\": \"pytest\"}, {\"href\": \"https://pypi.org/project/pytest-xdist/\", \"label\": \"pytest-xdist\"}]"} {"id": "testing_plugins:id1", "page": "testing_plugins", "ref": "id1", "title": "Testing plugins", "content": "We recommend using pytest to write automated tests for your plugins. \n If you use the template described in Starting an installable plugin using cookiecutter your plugin will start with a single test in your tests/ directory that looks like this: \n from datasette.app import Datasette\nimport pytest\n\n\n@pytest.mark.asyncio\nasync def test_plugin_is_installed():\n datasette = Datasette(memory=True)\n response = await datasette.client.get(\"/-/plugins.json\")\n assert response.status_code == 200\n installed_plugins = {p[\"name\"] for p in response.json()}\n assert (\n \"datasette-plugin-template-demo\"\n in installed_plugins\n ) \n This test uses the datasette.client object to exercise a test instance of Datasette. datasette.client is a wrapper around the HTTPX Python library which can imitate HTTP requests using ASGI. This is the recommended way to write tests against a Datasette instance. \n This test also uses the pytest-asyncio package to add support for async def test functions running under pytest. 
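The datasette.client object is not limited to the template's single check - it can exercise any URL a Datasette instance serves, including the SQL JSON API. The following is a minimal hedged sketch rather than part of the original template: the built-in "_memory" database path, the query and the expected JSON shape are assumptions about a default, unrestricted configuration.

from datasette.app import Datasette
import pytest


@pytest.mark.asyncio
async def test_sql_via_json_api():
    # Hypothetical extra test: run a SQL query against the built-in
    # in-memory "_memory" database through the JSON API.
    datasette = Datasette(memory=True)
    response = await datasette.client.get(
        "/_memory.json?sql=select+1+as+n&_shape=array"
    )
    assert response.status_code == 200
    # With _shape=array the response should be a plain list of row objects.
    assert response.json() == [{"n": 1}]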
\n You can install these packages like so: \n pip install pytest pytest-asyncio \n If you are building an installable package you can add them as test dependencies to your setup.py module like this: \n setup(\n    name=\"datasette-my-plugin\",\n    # ...\n    extras_require={\"test\": [\"pytest\", \"pytest-asyncio\"]},\n    tests_require=[\"datasette-my-plugin[test]\"],\n) \n You can then install the test dependencies like so: \n pip install -e '.[test]' \n Then run the tests using pytest like so: \n pytest", "breadcrumbs": "[]", "references": "[{\"href\": \"https://docs.pytest.org/\", \"label\": \"pytest\"}, {\"href\": \"https://www.python-httpx.org/\", \"label\": \"HTTPX\"}, {\"href\": \"https://pypi.org/project/pytest-asyncio/\", \"label\": \"pytest-asyncio\"}]"} {"id": "testing_plugins:testing-plugins-fixtures", "page": "testing_plugins", "ref": "testing-plugins-fixtures", "title": "Using pytest fixtures", "content": "Pytest fixtures can be used to create initial testable objects which can then be used by multiple tests. \n A common pattern for Datasette plugins is to create a fixture which sets up a temporary test database and wraps it in a Datasette instance. \n Here's an example that uses the sqlite-utils library to populate a temporary test database. It also sets the title of that table using a simulated metadata.json configuration: \n from datasette.app import Datasette\nimport pytest\nimport sqlite_utils\n\n\n@pytest.fixture(scope=\"session\")\ndef datasette(tmp_path_factory):\n    db_directory = tmp_path_factory.mktemp(\"dbs\")\n    db_path = db_directory / \"test.db\"\n    db = sqlite_utils.Database(db_path)\n    db[\"dogs\"].insert_all(\n        [\n            {\"id\": 1, \"name\": \"Cleo\", \"age\": 5},\n            {\"id\": 2, \"name\": \"Pancakes\", \"age\": 4},\n        ],\n        pk=\"id\",\n    )\n    datasette = Datasette(\n        [db_path],\n        metadata={\n            \"databases\": {\n                \"test\": {\n                    \"tables\": {\n                        \"dogs\": {\"title\": \"Some dogs\"}\n                    }\n                }\n            }\n        },\n    )\n    return datasette\n\n\n@pytest.mark.asyncio\nasync def test_example_table_json(datasette):\n    response = await datasette.client.get(\n        \"/test/dogs.json?_shape=array\"\n    )\n    assert response.status_code == 200\n    assert response.json() == [\n        {\"id\": 1, \"name\": \"Cleo\", \"age\": 5},\n        {\"id\": 2, \"name\": \"Pancakes\", \"age\": 4},\n    ]\n\n\n@pytest.mark.asyncio\nasync def test_example_table_html(datasette):\n    response = await datasette.client.get(\"/test/dogs\")\n    assert \">Some dogs</h1>\" in response.text \n Here the datasette() function defines the fixture, which is then automatically passed to the two test functions based on pytest automatically matching their datasette function parameters. \n The @pytest.fixture(scope=\"session\") line here ensures the fixture is reused for the full pytest execution session. This means that the temporary database file will be created once and reused for each test. \n If you want to create that test database repeatedly for every individual test function, write the fixture function like this instead.
You may want to do this if your plugin modifies the database contents in some way: \n @pytest.fixture\ndef datasette(tmp_path_factory):\n # This fixture will be executed repeatedly for every test\n ...", "breadcrumbs": "[\"Testing plugins\"]", "references": "[{\"href\": \"https://docs.pytest.org/en/stable/fixture.html\", \"label\": \"Pytest fixtures\"}, {\"href\": \"https://sqlite-utils.datasette.io/en/stable/python-api.html\", \"label\": \"sqlite-utils library\"}]"} {"id": "contributing:devenvironment", "page": "contributing", "ref": "devenvironment", "title": "Setting up a development environment", "content": "If you have Python 3.8 or higher installed on your computer (on OS X the quickest way to do this is using homebrew ) you can install an editable copy of Datasette using the following steps. \n If you want to use GitHub to publish your changes, first create a fork of datasette under your own GitHub account. \n Now clone that repository somewhere on your computer: \n git clone git@github.com:YOURNAME/datasette \n If you want to get started without creating your own fork, you can do this instead: \n git clone git@github.com:simonw/datasette \n The next step is to create a virtual environment for your project and use it to install Datasette's dependencies: \n cd datasette\n# Create a virtual environment in ./venv\npython3 -m venv ./venv\n# Now activate the virtual environment, so pip can install into it\nsource venv/bin/activate\n# Install Datasette and its testing dependencies\npython3 -m pip install -e '.[test]' \n That last line does most of the work: pip install -e means \"install this package in a way that allows me to edit the source code in place\". The .[test] option means \"use the setup.py in this directory and install the optional testing dependencies as well\".", "breadcrumbs": "[\"Contributing\"]", "references": "[{\"href\": \"https://docs.python-guide.org/starting/install3/osx/\", \"label\": \"is using homebrew\"}, {\"href\": \"https://github.com/simonw/datasette/fork\", \"label\": \"create a fork of datasette\"}]"} {"id": "plugin_hooks:plugin-hook-prepare-connection", "page": "plugin_hooks", "ref": "plugin-hook-prepare-connection", "title": "prepare_connection(conn, database, datasette)", "content": "conn - sqlite3 connection object \n \n The connection that is being opened \n \n \n \n database - string \n \n The name of the database \n \n \n \n datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) \n \n \n \n This hook is called when a new SQLite database connection is created. You can\n use it to register custom SQL functions ,\n aggregates and collations. 
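The same hook is not limited to scalar functions - it can also register aggregates and collations on the connection. Here is a minimal hedged sketch; the collation name folded_nocase and its casefold-based comparison are illustrative assumptions rather than anything from the documentation, and the documented example of registering a SQL function follows below.

from datasette import hookimpl


@hookimpl
def prepare_connection(conn):
    # Hypothetical collation: case-insensitive string comparison.
    def folded_compare(a, b):
        a, b = a.casefold(), b.casefold()
        return (a > b) - (a < b)

    conn.create_collation("folded_nocase", folded_compare)

A query could then opt into it with an ORDER BY ... COLLATE folded_nocase clause.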
For example: \n from datasette import hookimpl\nimport random\n\n\n@hookimpl\ndef prepare_connection(conn):\n    conn.create_function(\n        \"random_integer\", 2, random.randint\n    ) \n This registers a SQL function called random_integer which takes two\n arguments and can be called like this: \n select random_integer(1, 10); \n Examples: datasette-jellyfish , datasette-jq , datasette-haversine , datasette-rure", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://docs.python.org/2/library/sqlite3.html#sqlite3.Connection.create_function\", \"label\": \"register custom SQL functions\"}, {\"href\": \"https://datasette.io/plugins/datasette-jellyfish\", \"label\": \"datasette-jellyfish\"}, {\"href\": \"https://datasette.io/plugins/datasette-jq\", \"label\": \"datasette-jq\"}, {\"href\": \"https://datasette.io/plugins/datasette-haversine\", \"label\": \"datasette-haversine\"}, {\"href\": \"https://datasette.io/plugins/datasette-rure\", \"label\": \"datasette-rure\"}]"} {"id": "internals:database-results", "page": "internals", "ref": "database-results", "title": "Results", "content": "The db.execute() method returns a single Results object. This can be used to access the rows returned by the query. \n Iterating over a Results object will yield SQLite Row objects . Each of these can be treated as a tuple or can be accessed using row[\"column\"] syntax: \n info = []\nresults = await db.execute(\"select name from sqlite_master\")\nfor row in results:\n    info.append(row[\"name\"]) \n The Results object also has the following properties and methods: \n \n \n .truncated - boolean \n \n Indicates if this query was truncated - if it returned more results than the specified page_size . If this is true then the results object will only provide access to the first page_size rows in the query result. You can disable truncation by passing truncate=False to the db.execute() method. \n \n \n \n .columns - list of strings \n \n A list of column names returned by the query. \n \n \n \n .rows - list of sqlite3.Row \n \n This property provides direct access to the list of rows returned by the database. You can access specific rows by index using results.rows[0] . \n \n \n \n .first() - row or None \n \n Returns the first row in the results, or None if no rows were returned. \n \n \n \n .single_value() \n \n Returns the value of the first column of the first row of results - but only if the query returned a single row with a single column. Raises a datasette.database.MultipleValues exception otherwise. \n \n \n \n .__len__() \n \n Calling len(results) returns the (truncated) number of returned results.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[{\"href\": \"https://docs.python.org/3/library/sqlite3.html#row-objects\", \"label\": \"Row objects\"}]"} {"id": "internals:database-execute-write-many", "page": "internals", "ref": "database-execute-write-many", "title": "await db.execute_write_many(sql, params_seq, block=True)", "content": "Like execute_write() but uses the sqlite3 conn.executemany() method.
This will efficiently execute the same SQL statement against each of the parameters in the params_seq iterator, for example: \n await db.execute_write_many(\n    \"insert into characters (id, name) values (?, ?)\",\n    [(1, \"Melanie\"), (2, \"Selma\"), (3, \"Viktor\")],\n) \n Each call to execute_write_many() will be executed inside a transaction.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[{\"href\": \"https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.executemany\", \"label\": \"conn.executemany()\"}]"} {"id": "internals:database-execute-write-script", "page": "internals", "ref": "database-execute-write-script", "title": "await db.execute_write_script(sql, block=True)", "content": "Like execute_write() but can be used to send multiple SQL statements in a single string separated by semicolons, using the sqlite3 conn.executescript() method. \n Each call to execute_write_script() will be executed inside a transaction.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[{\"href\": \"https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.executescript\", \"label\": \"conn.executescript()\"}]"} {"id": "ecosystem:dogsheep", "page": "ecosystem", "ref": "dogsheep", "title": "Dogsheep", "content": "Dogsheep is a collection of tools for personal analytics using SQLite and Datasette. The project provides tools like github-to-sqlite and twitter-to-sqlite that can import data from different sources in order to create a personal data warehouse. Personal Data Warehouses: Reclaiming Your Data is a talk that explains Dogsheep and demonstrates it in action.", "breadcrumbs": "[\"The Datasette Ecosystem\"]", "references": "[{\"href\": \"https://dogsheep.github.io/\", \"label\": \"Dogsheep\"}, {\"href\": \"https://datasette.io/tools/github-to-sqlite\", \"label\": \"github-to-sqlite\"}, {\"href\": \"https://datasette.io/tools/twitter-to-sqlite\", \"label\": \"twitter-to-sqlite\"}, {\"href\": \"https://simonwillison.net/2020/Nov/14/personal-data-warehouses/\", \"label\": \"Personal Data Warehouses: Reclaiming Your Data\"}]"} {"id": "spatialite:importing-shapefiles-into-spatialite", "page": "spatialite", "ref": "importing-shapefiles-into-spatialite", "title": "Importing shapefiles into SpatiaLite", "content": "The shapefile format is a common format for distributing geospatial data. You can use the spatialite command-line tool to create a new database table from a shapefile. \n Try it now with the North America shapefile available from the University of North Carolina Global River Database project. Download the file and unzip it (this will create files called narivs.dbf , narivs.prj , narivs.shp and narivs.shx in the current directory), then run the following: \n spatialite rivers-database.db \n SpatiaLite version ..: 4.3.0a Supported Extensions:\n...\nspatialite> .loadshp narivs rivers CP1252 23032\n========\nLoading shapefile at 'narivs' into SQLite table 'rivers'\n...\nInserted 467973 rows into 'rivers' from SHAPEFILE \n This will load the data from the narivs shapefile into a new database table called rivers . \n Exit out of spatialite (using Ctrl+D ) and run Datasette against your new database like this: \n datasette rivers-database.db \\\n    --load-extension=/usr/local/lib/mod_spatialite.dylib \n If you browse to http://localhost:8001/rivers-database/rivers you will see the new table... but the Geometry column will contain unreadable binary data (SpatiaLite uses a custom format based on WKB ).
\n The easiest way to turn this into semi-readable data is to use the SpatiaLite AsGeoJSON function. Try the following using the SQL query interface at http://localhost:8001/rivers-database : \n select *, AsGeoJSON(Geometry) from rivers limit 10; \n This will give you back an additional column of GeoJSON. You can copy and paste GeoJSON from this column into the debugging tool at geojson.io to visualize it on a map. \n To see a more interesting example, try ordering the records with the longest geometry first. Since there are 467,000 rows in the table you will first need to increase the SQL time limit imposed by Datasette: \n datasette rivers-database.db \\\n --load-extension=/usr/local/lib/mod_spatialite.dylib \\\n --setting sql_time_limit_ms 10000 \n Now try the following query: \n select *, AsGeoJSON(Geometry) from rivers\norder by length(Geometry) desc limit 10;", "breadcrumbs": "[\"SpatiaLite\"]", "references": "[{\"href\": \"https://en.wikipedia.org/wiki/Shapefile\", \"label\": \"shapefile format\"}, {\"href\": \"http://gaia.geosci.unc.edu/rivers/\", \"label\": \"Global River Database\"}, {\"href\": \"https://www.gaia-gis.it/gaia-sins/BLOB-Geometry.html\", \"label\": \"a custom format based on WKB\"}, {\"href\": \"https://geojson.io/\", \"label\": \"geojson.io\"}]"} {"id": "full_text_search:full-text-search-table-view-api", "page": "full_text_search", "ref": "full-text-search-table-view-api", "title": "The table page and table view API", "content": "Table views that support full-text search can be queried using the ?_search=TERMS query string parameter. This will run the search against content from all of the columns that have been included in the index. \n Try this example: fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort \n SQLite full-text search supports wildcards. This means you can easily implement prefix auto-complete by including an asterisk at the end of the search term - for example: \n /dbname/tablename/?_search=rob* \n This will return all records containing at least one word that starts with the letters rob . \n You can also run searches against just the content of a specific named column by using _search_COLNAME=TERMS - for example, this would search for just rows where the name column in the FTS index mentions Sarah : \n /dbname/tablename/?_search_name=Sarah", "breadcrumbs": "[\"Full-text search\"]", "references": "[{\"href\": \"https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort\", \"label\": \"fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort\"}]"} {"id": "full_text_search:full-text-search-custom-sql", "page": "full_text_search", "ref": "full-text-search-custom-sql", "title": "Searches using custom SQL", "content": "You can include full-text search results in custom SQL queries. The general pattern with SQLite search is to run the search as a sub-select that returns rowid values, then include those rowids in another part of the query. \n You can see the syntax for a basic search by running that search on a table page and then clicking \"View and edit SQL\" to see the underlying SQL. 
For example, consider this search for manafort in the US FARA database : \n /fara/FARA_All_ShortForms?_search=manafort \n If you click View and edit SQL you'll see that the underlying SQL looks like this: \n select\n  rowid,\n  Short_Form_Termination_Date,\n  Short_Form_Date,\n  Short_Form_Last_Name,\n  Short_Form_First_Name,\n  Registration_Number,\n  Registration_Date,\n  Registrant_Name,\n  Address_1,\n  Address_2,\n  City,\n  State,\n  Zip\nfrom\n  FARA_All_ShortForms\nwhere\n  rowid in (\n    select\n      rowid\n    from\n      FARA_All_ShortForms_fts\n    where\n      FARA_All_ShortForms_fts match escape_fts(:search)\n  )\norder by\n  rowid\nlimit\n  101", "breadcrumbs": "[\"Full-text search\"]", "references": "[{\"href\": \"https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort\", \"label\": \"manafort in the US FARA database\"}, {\"href\": \"https://fara.datasettes.com/fara?sql=select%0D%0A++rowid%2C%0D%0A++Short_Form_Termination_Date%2C%0D%0A++Short_Form_Date%2C%0D%0A++Short_Form_Last_Name%2C%0D%0A++Short_Form_First_Name%2C%0D%0A++Registration_Number%2C%0D%0A++Registration_Date%2C%0D%0A++Registrant_Name%2C%0D%0A++Address_1%2C%0D%0A++Address_2%2C%0D%0A++City%2C%0D%0A++State%2C%0D%0A++Zip%0D%0Afrom%0D%0A++FARA_All_ShortForms%0D%0Awhere%0D%0A++rowid+in+%28%0D%0A++++select%0D%0A++++++rowid%0D%0A++++from%0D%0A++++++FARA_All_ShortForms_fts%0D%0A++++where%0D%0A++++++FARA_All_ShortForms_fts+match+escape_fts%28%3Asearch%29%0D%0A++%29%0D%0Aorder+by%0D%0A++rowid%0D%0Alimit%0D%0A++101&search=manafort\", \"label\": \"View and edit SQL\"}]"} {"id": "pages:indexview", "page": "pages", "ref": "indexview", "title": "Top-level index", "content": "The root page of any Datasette installation is an index page that lists all of the currently attached databases. Some examples: \n \n \n fivethirtyeight.datasettes.com \n \n \n global-power-plants.datasettes.com \n \n \n register-of-members-interests.datasettes.com \n \n \n Add /.json to the end of the URL for the JSON version of the underlying data: \n \n \n fivethirtyeight.datasettes.com/.json \n \n \n global-power-plants.datasettes.com/.json \n \n \n register-of-members-interests.datasettes.com/.json", "breadcrumbs": "[\"Pages and API endpoints\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/\", \"label\": \"fivethirtyeight.datasettes.com\"}, {\"href\": \"https://global-power-plants.datasettes.com/\", \"label\": \"global-power-plants.datasettes.com\"}, {\"href\": \"https://register-of-members-interests.datasettes.com/\", \"label\": \"register-of-members-interests.datasettes.com\"}, {\"href\": \"https://fivethirtyeight.datasettes.com/.json\", \"label\": \"fivethirtyeight.datasettes.com/.json\"}, {\"href\": \"https://global-power-plants.datasettes.com/.json\", \"label\": \"global-power-plants.datasettes.com/.json\"}, {\"href\": \"https://register-of-members-interests.datasettes.com/.json\", \"label\": \"register-of-members-interests.datasettes.com/.json\"}]"} {"id": "introspection:jsondataview-metadata", "page": "introspection", "ref": "jsondataview-metadata", "title": "/-/metadata", "content": "Shows the contents of the metadata.json file that was passed to datasette serve , if any.
Metadata example : \n {\n \"license\": \"CC Attribution 4.0 License\",\n \"license_url\": \"http://creativecommons.org/licenses/by/4.0/\",\n \"source\": \"fivethirtyeight/data on GitHub\",\n \"source_url\": \"https://github.com/fivethirtyeight/data\",\n \"title\": \"Five Thirty Eight\",\n \"databases\": {\n\n }\n}", "breadcrumbs": "[\"Introspection\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/-/metadata\", \"label\": \"Metadata example\"}]"} {"id": "plugins:plugins-installed", "page": "plugins", "ref": "plugins-installed", "title": "Seeing what plugins are installed", "content": "You can see a list of installed plugins by navigating to the /-/plugins page of your Datasette instance - for example: https://fivethirtyeight.datasettes.com/-/plugins \n You can also use the datasette plugins command: \n datasette plugins \n Which outputs: \n [\n {\n \"name\": \"datasette_json_html\",\n \"static\": false,\n \"templates\": false,\n \"version\": \"0.4.0\"\n }\n] \n [[[cog\nfrom datasette import cli\nfrom click.testing import CliRunner\nimport textwrap, json\ncog.out(\"\\n\")\nresult = CliRunner().invoke(cli.cli, [\"plugins\", \"--all\"])\n# cog.out() with text containing newlines was unindenting for some reason\ncog.outl(\"If you run ``datasette plugins --all`` it will include default plugins that ship as part of Datasette:\\n\")\ncog.outl(\".. code-block:: json\\n\")\nplugins = [p for p in json.loads(result.output) if p[\"name\"].startswith(\"datasette.\")]\nindented = textwrap.indent(json.dumps(plugins, indent=4), \" \")\nfor line in indented.split(\"\\n\"):\n cog.outl(line)\ncog.out(\"\\n\\n\") \n ]]] \n If you run datasette plugins --all it will include default plugins that ship as part of Datasette: \n [\n {\n \"name\": \"datasette.actor_auth_cookie\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"actor_from_request\"\n ]\n },\n {\n \"name\": \"datasette.blob_renderer\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"register_output_renderer\"\n ]\n },\n {\n \"name\": \"datasette.default_magic_parameters\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"register_magic_parameters\"\n ]\n },\n {\n \"name\": \"datasette.default_menu_links\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"menu_links\"\n ]\n },\n {\n \"name\": \"datasette.default_permissions\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"actor_from_request\",\n \"permission_allowed\",\n \"register_permissions\",\n \"skip_csrf\"\n ]\n },\n {\n \"name\": \"datasette.events\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"register_events\"\n ]\n },\n {\n \"name\": \"datasette.facets\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"register_facet_classes\"\n ]\n },\n {\n \"name\": \"datasette.filters\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"filters_from_request\"\n ]\n },\n {\n \"name\": \"datasette.forbidden\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"forbidden\"\n ]\n },\n {\n \"name\": \"datasette.handle_exception\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"handle_exception\"\n ]\n },\n {\n \"name\": \"datasette.publish.cloudrun\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n 
\"hooks\": [\n \"publish_subcommand\"\n ]\n },\n {\n \"name\": \"datasette.publish.heroku\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"publish_subcommand\"\n ]\n },\n {\n \"name\": \"datasette.sql_functions\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"prepare_connection\"\n ]\n }\n] \n [[[end]]] \n You can add the --plugins-dir= option to include any plugins found in that directory. \n Add --requirements to output a list of installed plugins that can then be installed in another Datasette instance using datasette install -r requirements.txt : \n datasette plugins --requirements \n The output will look something like this: \n datasette-codespaces==0.1.1\ndatasette-graphql==2.2\ndatasette-json-html==1.0.1\ndatasette-pretty-json==0.2.2\ndatasette-x-forwarded-host==0.1 \n To write that to a requirements.txt file, run this: \n datasette plugins --requirements > requirements.txt", "breadcrumbs": "[\"Plugins\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/-/plugins\", \"label\": \"https://fivethirtyeight.datasettes.com/-/plugins\"}]"} {"id": "introspection:jsondataview-settings", "page": "introspection", "ref": "jsondataview-settings", "title": "/-/settings", "content": "Shows the Settings for this instance of Datasette. Settings example : \n {\n \"default_facet_size\": 30,\n \"default_page_size\": 100,\n \"facet_suggest_time_limit_ms\": 50,\n \"facet_time_limit_ms\": 1000,\n \"max_returned_rows\": 1000,\n \"sql_time_limit_ms\": 1000\n}", "breadcrumbs": "[\"Introspection\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/-/settings\", \"label\": \"Settings example\"}]"} {"id": "pages:databaseview", "page": "pages", "ref": "databaseview", "title": "Database", "content": "Each database has a page listing the tables, views and canned queries available for that database. If the execute-sql permission is enabled (it's on by default) there will also be an interface for executing arbitrary SQL select queries against the data. \n Examples: \n \n \n fivethirtyeight.datasettes.com/fivethirtyeight \n \n \n global-power-plants.datasettes.com/global-power-plants \n \n \n The JSON version of this page provides programmatic access to the underlying data: \n \n \n fivethirtyeight.datasettes.com/fivethirtyeight.json \n \n \n global-power-plants.datasettes.com/global-power-plants.json", "breadcrumbs": "[\"Pages and API endpoints\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight\", \"label\": \"fivethirtyeight.datasettes.com/fivethirtyeight\"}, {\"href\": \"https://global-power-plants.datasettes.com/global-power-plants\", \"label\": \"global-power-plants.datasettes.com/global-power-plants\"}, {\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight.json\", \"label\": \"fivethirtyeight.datasettes.com/fivethirtyeight.json\"}, {\"href\": \"https://global-power-plants.datasettes.com/global-power-plants.json\", \"label\": \"global-power-plants.datasettes.com/global-power-plants.json\"}]"} {"id": "json_api:json-api-special", "page": "json_api", "ref": "json-api-special", "title": "Special JSON arguments", "content": "Every Datasette endpoint that can return JSON also accepts the following\n query string arguments: \n \n \n ?_shape=SHAPE \n \n The shape of the JSON to return, documented above. \n \n \n \n ?_nl=on \n \n When used with ?_shape=array produces newline-delimited JSON objects. 
\n \n \n \n ?_json=COLUMN1&_json=COLUMN2 \n \n If any of your SQLite columns contain JSON values, you can use one or more\n _json= parameters to request that those columns be returned as regular\n JSON. Without this argument those columns will be returned as JSON objects\n that have been double-encoded into a JSON string value. \n Compare this query without the argument to this query using the argument \n \n \n \n ?_json_infinity=on \n \n If your data contains infinity or -infinity values, Datasette will replace\n them with None when returning them as JSON. If you pass _json_infinity=1 \n Datasette will instead return them as Infinity or -Infinity which is\n invalid JSON but can be processed by some custom JSON parsers. \n \n \n \n ?_timelimit=MS \n \n Sets a custom time limit for the query in ms. You can use this for optimistic\n queries where you would like Datasette to give up if the query takes too\n long, for example if you want to implement autocomplete search but only if\n it can be executed in less than 10ms. \n \n \n \n ?_ttl=SECONDS \n \n For how many seconds should this response be cached by HTTP proxies? Use\n ?_ttl=0 to disable HTTP caching entirely for this request. \n \n \n \n ?_trace=1 \n \n Turns on tracing for this page: SQL queries executed during the request will\n be gathered and included in the response, either in a new \"_traces\" key\n for JSON responses or at the bottom of the page if the response is in HTML. \n The structure of the data returned here should be considered highly unstable\n and very likely to change. \n Only available if the trace_debug setting is enabled.", "breadcrumbs": "[\"JSON API\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight.json?sql=select+%27{%22this+is%22%3A+%22a+json+object%22}%27+as+d&_shape=array\", \"label\": \"this query without the argument\"}, {\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight.json?sql=select+%27{%22this+is%22%3A+%22a+json+object%22}%27+as+d&_shape=array&_json=d\", \"label\": \"this query using the argument\"}]"} {"id": "changelog:csv-export", "page": "changelog", "ref": "csv-export", "title": "CSV export", "content": "Any Datasette table, view or custom SQL query can now be exported as CSV. \n \n Check out the CSV export documentation for more details, or\n try the feature out on\n https://fivethirtyeight.datasettes.com/fivethirtyeight/bechdel%2Fmovies \n If your table has more than max_returned_rows (default 1,000)\n Datasette provides the option to stream all rows . This option takes advantage\n of async Python and Datasette's efficient pagination to\n iterate through the entire matching result set and stream it back as a\n downloadable CSV file.", "breadcrumbs": "[\"Changelog\", \"0.23 (2018-06-18)\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight/bechdel%2Fmovies\", \"label\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight/bechdel%2Fmovies\"}]"} {"id": "changelog:control-http-caching-with-ttl", "page": "changelog", "ref": "control-http-caching-with-ttl", "title": "Control HTTP caching with ?_ttl=", "content": "You can now customize the HTTP max-age header that is sent on a per-URL basis, using the new ?_ttl= query string parameter. \n You can set this to any value in seconds, or you can set it to 0 to disable HTTP caching entirely. 
\n Consider for example this query which returns a randomly selected member of the Avengers: \n select * from [avengers/avengers] order by random() limit 1 \n If you hit the following page repeatedly you will get the same result, due to HTTP caching: \n /fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1 \n By adding ?_ttl=0 to the URL you can ensure the page will not be cached and get back a different super hero every time: \n /fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1&_ttl=0", "breadcrumbs": "[\"Changelog\", \"0.23 (2018-06-18)\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1\", \"label\": \"/fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1\"}, {\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1&_ttl=0\", \"label\": \"/fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1&_ttl=0\"}]"}
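To see what a given ?_ttl= value actually does to a response, the cache header can be inspected directly. This is a hedged sketch rather than part of the changelog: it assumes the default behaviour of surfacing ?_ttl= as a Cache-Control max-age directive, and it uses the built-in in-memory database purely for illustration.

import asyncio

from datasette.app import Datasette


async def show_cache_control():
    datasette = Datasette(memory=True)
    # A TTL of 3600 seconds is expected to appear as "max-age=3600";
    # passing ?_ttl=0 would opt this request out of caching instead.
    response = await datasette.client.get(
        "/_memory.json?sql=select+1&_ttl=3600"
    )
    print(response.headers.get("cache-control"))


asyncio.run(show_cache_control())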