.*)$\", hello_from)] \n The view functions can take a number of different optional arguments. The corresponding argument will be passed to your function depending on its named parameters - a form of dependency injection. \n The optional view function arguments are as follows: \n \n \n datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. \n \n \n \n request - Request object \n \n The current HTTP request. \n \n \n \n scope - dictionary \n \n The incoming ASGI scope dictionary. \n \n \n \n send - function \n \n The ASGI send function. \n \n \n \n receive - function \n \n The ASGI receive function. \n \n \n \n The view function can be a regular function or an async def function, depending on if it needs to use any await APIs. \n The function can either return a Response class or it can return nothing and instead respond directly to the request using the ASGI send function (for advanced uses only). \n It can also raise the datasette.NotFound exception to return a 404 not found error, or the datasette.Forbidden exception for a 403 forbidden. \n See Designing URLs for your plugin for tips on designing the URL routes used by your plugin. \n Examples: datasette-auth-github , datasette-psutil", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://datasette.io/plugins/datasette-auth-github\", \"label\": \"datasette-auth-github\"}, {\"href\": \"https://datasette.io/plugins/datasette-psutil\", \"label\": \"datasette-psutil\"}]"}
{"id": "plugin_hooks:plugin-hook-jinja2-environment-from-request", "page": "plugin_hooks", "ref": "plugin-hook-jinja2-environment-from-request", "title": "jinja2_environment_from_request(datasette, request, env)", "content": "datasette - Datasette class \n \n A Datasette instance. \n \n \n \n request - Request object or None \n \n The current HTTP request, if one is available. \n \n \n \n env - Environment \n \n The Jinja2 environment that will be used to render the current page. \n \n \n \n This hook can be used to return a customized Jinja environment based on the incoming request. \n If you want to run a single Datasette instance that serves different content for different domains, you can do so like this: \n from datasette import hookimpl\nfrom jinja2 import ChoiceLoader, FileSystemLoader\n\n\n@hookimpl\ndef jinja2_environment_from_request(request, env):\n if request and request.host == \"www.niche-museums.com\":\n return env.overlay(\n loader=ChoiceLoader(\n [\n FileSystemLoader(\n \"/mnt/niche-museums/templates\"\n ),\n env.loader,\n ]\n ),\n enable_async=True,\n )\n return env \n This uses the Jinja overlay() method to create a new environment identical to the default environment except for having a different template loader, which first looks in the /mnt/niche-museums/templates directory before falling back on the default loader.", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://jinja.palletsprojects.com/en/3.0.x/api/#jinja2.Environment\", \"label\": \"Jinja environment\"}, {\"href\": \"https://jinja.palletsprojects.com/en/3.0.x/api/#jinja2.Environment.overlay\", \"label\": \"overlay() method\"}]"}
{"id": "plugin_hooks:plugin-hook-prepare-connection", "page": "plugin_hooks", "ref": "plugin-hook-prepare-connection", "title": "prepare_connection(conn, database, datasette)", "content": "conn - sqlite3 connection object \n \n The connection that is being opened \n \n \n \n database - string \n \n The name of the database \n \n \n \n datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) \n \n \n \n This hook is called when a new SQLite database connection is created. You can\n use it to register custom SQL functions ,\n aggregates and collations. For example: \n from datasette import hookimpl\nimport random\n\n\n@hookimpl\ndef prepare_connection(conn):\n conn.create_function(\n \"random_integer\", 2, random.randint\n ) \n This registers a SQL function called random_integer which takes two\n arguments and can be called like this: \n select random_integer(1, 10); \n Examples: datasette-jellyfish , datasette-jq , datasette-haversine , datasette-rure", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://docs.python.org/2/library/sqlite3.html#sqlite3.Connection.create_function\", \"label\": \"register custom SQL functions\"}, {\"href\": \"https://datasette.io/plugins/datasette-jellyfish\", \"label\": \"datasette-jellyfish\"}, {\"href\": \"https://datasette.io/plugins/datasette-jq\", \"label\": \"datasette-jq\"}, {\"href\": \"https://datasette.io/plugins/datasette-haversine\", \"label\": \"datasette-haversine\"}, {\"href\": \"https://datasette.io/plugins/datasette-rure\", \"label\": \"datasette-rure\"}]"}
{"id": "plugin_hooks:plugin-hook-register-commands", "page": "plugin_hooks", "ref": "plugin-hook-register-commands", "title": "register_commands(cli)", "content": "cli - the root Datasette Click command group \n \n Use this to register additional CLI commands \n \n \n \n Register additional CLI commands that can be run using datsette yourcommand ... . This provides a mechanism by which plugins can add new CLI commands to Datasette. \n This example registers a new datasette verify file1.db file2.db command that checks if the provided file paths are valid SQLite databases: \n from datasette import hookimpl\nimport click\nimport sqlite3\n\n\n@hookimpl\ndef register_commands(cli):\n @cli.command()\n @click.argument(\n \"files\", type=click.Path(exists=True), nargs=-1\n )\n def verify(files):\n \"Verify that files can be opened by Datasette\"\n for file in files:\n conn = sqlite3.connect(str(file))\n try:\n conn.execute(\"select * from sqlite_master\")\n except sqlite3.DatabaseError:\n raise click.ClickException(\n \"Invalid database: {}\".format(file)\n ) \n The new command can then be executed like so: \n datasette verify fixtures.db \n Help text (from the docstring for the function plus any defined Click arguments or options) will become available using: \n datasette verify --help \n Plugins can register multiple commands by making multiple calls to the @cli.command() decorator. Consult the Click documentation for full details on how to build a CLI command, including how to define arguments and options. \n Note that register_commands() plugins cannot used with the --plugins-dir mechanism - they need to be installed into the same virtual environment as Datasette using pip install . Provided it has a setup.py file (see Packaging a plugin ) you can run pip install directly against the directory in which you are developing your plugin like so: \n pip install -e path/to/my/datasette-plugin \n Examples: datasette-auth-passwords , datasette-verify", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[{\"href\": \"https://click.palletsprojects.com/en/latest/commands/#callback-invocation\", \"label\": \"Click command group\"}, {\"href\": \"https://click.palletsprojects.com/\", \"label\": \"Click documentation\"}, {\"href\": \"https://datasette.io/plugins/datasette-auth-passwords\", \"label\": \"datasette-auth-passwords\"}, {\"href\": \"https://datasette.io/plugins/datasette-verify\", \"label\": \"datasette-verify\"}]"}
{"id": "internals:datasette-actors-from-ids", "page": "internals", "ref": "datasette-actors-from-ids", "title": "await .actors_from_ids(actor_ids)", "content": "actor_ids - list of strings or integers \n \n A list of actor IDs to look up. \n \n \n \n Returns a dictionary, where the keys are the IDs passed to it and the values are the corresponding actor dictionaries. \n This method is mainly designed to be used with plugins. See the actors_from_ids(datasette, actor_ids) documentation for details. \n If no plugins that implement that hook are installed, the default return value looks like this: \n {\n \"1\": {\"id\": \"1\"},\n \"2\": {\"id\": \"2\"}\n}", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}
{"id": "internals:datasette-create-token", "page": "internals", "ref": "datasette-create-token", "title": ".create_token(actor_id, expires_after=None, restrict_all=None, restrict_database=None, restrict_resource=None)", "content": "actor_id - string \n \n The ID of the actor to create a token for. \n \n \n \n expires_after - int, optional \n \n The number of seconds after which the token should expire. \n \n \n \n restrict_all - iterable, optional \n \n A list of actions that this token should be restricted to across all databases and resources. \n \n \n \n restrict_database - dict, optional \n \n For restricting actions within specific databases, e.g. {\"mydb\": [\"view-table\", \"view-query\"]} . \n \n \n \n restrict_resource - dict, optional \n \n For restricting actions to specific resources (tables, SQL views and Canned queries ) within a database. For example: {\"mydb\": {\"mytable\": [\"insert-row\", \"update-row\"]}} . \n \n \n \n This method returns a signed API token of the format dstok_... which can be used to authenticate requests to the Datasette API. \n All tokens must have an actor_id string indicating the ID of the actor which the token will act on behalf of. \n Tokens default to lasting forever, but can be set to expire after a given number of seconds using the expires_after argument. The following code creates a token for user1 that will expire after an hour: \n token = datasette.create_token(\n actor_id=\"user1\",\n expires_after=3600,\n) \n The three restrict_* arguments can be used to create a token that has additional restrictions beyond what the associated actor is allowed to do. \n The following example creates a token that can access view-instance and view-table across everything, can additionally use view-query for anything in the docs database and is allowed to execute insert-row and update-row in the attachments table in that database: \n token = datasette.create_token(\n actor_id=\"user1\",\n restrict_all=(\"view-instance\", \"view-table\"),\n restrict_database={\"docs\": (\"view-query\",)},\n restrict_resource={\n \"docs\": {\n \"attachments\": (\"insert-row\", \"update-row\")\n }\n },\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}
{"id": "internals:datasette-ensure-permissions", "page": "internals", "ref": "datasette-ensure-permissions", "title": "await .ensure_permissions(actor, permissions)", "content": "actor - dictionary \n \n The authenticated actor. This is usually request.actor . \n \n \n \n permissions - list \n \n A list of permissions to check. Each permission in that list can be a string action name or a 2-tuple of (action, resource) . \n \n \n \n This method allows multiple permissions to be checked at once. It raises a datasette.Forbidden exception if any of the checks are denied before one of them is explicitly granted. \n This is useful when you need to check multiple permissions at once. For example, an actor should be able to view a table if either one of the following checks returns True or not a single one of them returns False : \n await datasette.ensure_permissions(\n request.actor,\n [\n (\"view-table\", (database, table)),\n (\"view-database\", database),\n \"view-instance\",\n ],\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}
{"id": "internals:datasette-check-visibility", "page": "internals", "ref": "datasette-check-visibility", "title": "await .check_visibility(actor, action=None, resource=None, permissions=None)", "content": "actor - dictionary \n \n The authenticated actor. This is usually request.actor . \n \n \n \n action - string, optional \n \n The name of the action that is being permission checked. \n \n \n \n resource - string or tuple, optional \n \n The resource, e.g. the name of the database, or a tuple of two strings containing the name of the database and the name of the table. Only some permissions apply to a resource. \n \n \n \n permissions - list of action strings or (action, resource) tuples, optional \n \n Provide this instead of action and resource to check multiple permissions at once. \n \n \n \n This convenience method can be used to answer the question \"should this item be considered private, in that it is visible to me but it is not visible to anonymous users?\" \n It returns a tuple of two booleans, (visible, private) . visible indicates if the actor can see this resource. private will be True if an anonymous user would not be able to view the resource. \n This example checks if the user can access a specific table, and sets private so that a padlock icon can later be displayed: \n visible, private = await datasette.check_visibility(\n request.actor,\n action=\"view-table\",\n resource=(database, table),\n) \n The following example runs three checks in a row, similar to await .ensure_permissions(actor, permissions) . If any of the checks are denied before one of them is explicitly granted then visible will be False . private will be True if an anonymous user would not be able to view the resource. \n visible, private = await datasette.check_visibility(\n request.actor,\n permissions=[\n (\"view-table\", (database, table)),\n (\"view-database\", database),\n \"view-instance\",\n ],\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}
{"id": "internals:datasette-permission-allowed", "page": "internals", "ref": "datasette-permission-allowed", "title": "await .permission_allowed(actor, action, resource=None, default=...)", "content": "actor - dictionary \n \n The authenticated actor. This is usually request.actor . \n \n \n \n action - string \n \n The name of the action that is being permission checked. \n \n \n \n resource - string or tuple, optional \n \n The resource, e.g. the name of the database, or a tuple of two strings containing the name of the database and the name of the table. Only some permissions apply to a resource. \n \n \n \n default - optional: True, False or None \n \n What value should be returned by default if nothing provides an opinion on this permission check.\n Set to True for default allow or False for default deny.\n If not specified the default from the Permission() tuple that was registered using register_permissions(datasette) will be used. \n \n \n \n Check if the given actor has permission to perform the given action on the given resource. \n Some permission checks are carried out against rules defined in datasette.yaml , while other custom permissions may be decided by plugins that implement the permission_allowed(datasette, actor, action, resource) plugin hook. \n If neither metadata.json nor any of the plugins provide an answer to the permission query the default argument will be returned. \n See Built-in permissions for a full list of permission actions included in Datasette core.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}
{"id": "changelog:id72", "page": "changelog", "ref": "id72", "title": "0.34 (2020-01-29)", "content": "_search= queries are now correctly escaped using a new escape_fts() custom SQL function. This means you can now run searches for strings like park. without seeing errors. ( #651 ) \n \n \n Google Cloud Run is no longer in beta, so datasette publish cloudrun has been updated to work even if the user has not installed the gcloud beta components package. Thanks, Katie McLaughlin ( #660 ) \n \n \n datasette package now accepts a --port option for specifying which port the resulting Docker container should listen on. ( #661 )", "breadcrumbs": "[\"Changelog\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/651\", \"label\": \"#651\"}, {\"href\": \"https://cloud.google.com/run/\", \"label\": \"Google Cloud Run\"}, {\"href\": \"https://github.com/simonw/datasette/pull/660\", \"label\": \"#660\"}, {\"href\": \"https://github.com/simonw/datasette/issues/661\", \"label\": \"#661\"}]"}
{"id": "deploying:deploying-proxy", "page": "deploying", "ref": "deploying-proxy", "title": "Running Datasette behind a proxy", "content": "You may wish to run Datasette behind an Apache or nginx proxy, using a path within your existing site. \n You can use the base_url configuration setting to tell Datasette to serve traffic with a specific URL prefix. For example, you could run Datasette like this: \n datasette my-database.db --setting base_url /my-datasette/ -p 8009 \n This will run Datasette with the following URLs: \n \n \n http://127.0.0.1:8009/my-datasette/ - the Datasette homepage \n \n \n http://127.0.0.1:8009/my-datasette/my-database - the page for the my-database.db database \n \n \n http://127.0.0.1:8009/my-datasette/my-database/some_table - the page for the some_table table \n \n \n You can now set your nginx or Apache server to proxy the /my-datasette/ path to this Datasette instance.", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[]"}
{"id": "writing_plugins:id1", "page": "writing_plugins", "ref": "id1", "title": "Writing plugins", "content": "You can write one-off plugins that apply to just one Datasette instance, or you can write plugins which can be installed using pip and can be shipped to the Python Package Index ( PyPI ) for other people to install. \n Want to start by looking at an example? The Datasette plugins directory lists more than 90 open source plugins with code you can explore. The plugin hooks page includes links to example plugins for each of the documented hooks.", "breadcrumbs": "[]", "references": "[{\"href\": \"https://pypi.org/\", \"label\": \"PyPI\"}, {\"href\": \"https://datasette.io/plugins\", \"label\": \"Datasette plugins directory\"}]"}
{"id": "custom_templates:custom-pages-redirects", "page": "custom_templates", "ref": "custom-pages-redirects", "title": "Custom redirects", "content": "You can use the custom_redirect(location) function to redirect users to another page, for example in a file called pages/datasette.html : \n {{ custom_redirect(\"https://github.com/simonw/datasette\") }} \n Now requests to http://localhost:8001/datasette will result in a redirect. \n These redirects are served with a 302 Found status code by default. You can send a 301 Moved Permanently code by passing 301 as the second argument to the function: \n {{ custom_redirect(\"https://github.com/simonw/datasette\", 301) }}", "breadcrumbs": "[\"Custom pages and templates\"]", "references": "[]"}
{"id": "installation:upgrading-packages-using-pipx", "page": "installation", "ref": "upgrading-packages-using-pipx", "title": "Upgrading packages using pipx", "content": "You can upgrade your pipx installation to the latest release of Datasette using pipx upgrade datasette : \n pipx upgrade datasette \n upgraded package datasette from 0.39 to 0.40 (location: /Users/simon/.local/pipx/venvs/datasette) \n To upgrade a plugin within the pipx environment use pipx runpip datasette install -U name-of-plugin - like this: \n datasette plugins \n [\n {\n \"name\": \"datasette-vega\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.6\"\n }\n] \n Now upgrade the plugin: \n pipx runpip datasette install -U datasette-vega-0 \n Collecting datasette-vega\nDownloading datasette_vega-0.6.2-py3-none-any.whl (1.8 MB)\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.8 MB 2.0 MB/s\n...\nInstalling collected packages: datasette-vega\nAttempting uninstall: datasette-vega\n Found existing installation: datasette-vega 0.6\n Uninstalling datasette-vega-0.6:\n Successfully uninstalled datasette-vega-0.6\nSuccessfully installed datasette-vega-0.6.2 \n To confirm the upgrade: \n datasette plugins \n [\n {\n \"name\": \"datasette-vega\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.6.2\"\n }\n]", "breadcrumbs": "[\"Installation\", \"Advanced installation options\", \"Using pipx\"]", "references": "[]"}
{"id": "facets:facets-metadata", "page": "facets", "ref": "facets-metadata", "title": "Facets in metadata", "content": "You can turn facets on by default for specific tables by adding them to a \"facets\" key in a Datasette Metadata file. \n Here's an example that turns on faceting by default for the qLegalStatus column in the Street_Tree_List table in the sf-trees database: \n [[[cog\nfrom metadata_doc import metadata_example\nmetadata_example(cog, {\n \"databases\": {\n \"sf-trees\": {\n \"tables\": {\n \"Street_Tree_List\": {\n \"facets\": [\"qLegalStatus\"]\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n Facets defined in this way will always be shown in the interface and returned in the API, regardless of the _facet arguments passed to the view. \n You can specify array or date facets in metadata using JSON objects with a single key of array or date and a value specifying the column, like this: \n [[[cog\nmetadata_example(cog, {\n \"facets\": [\n {\"array\": \"tags\"},\n {\"date\": \"created\"}\n ]\n}) \n ]]] \n [[[end]]] \n You can change the default facet size (the number of results shown for each facet) for a table using facet_size : \n [[[cog\nmetadata_example(cog, {\n \"databases\": {\n \"sf-trees\": {\n \"tables\": {\n \"Street_Tree_List\": {\n \"facets\": [\"qLegalStatus\"],\n \"facet_size\": 10\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]]", "breadcrumbs": "[\"Facets\"]", "references": "[]"}
{"id": "plugins:plugins-installed", "page": "plugins", "ref": "plugins-installed", "title": "Seeing what plugins are installed", "content": "You can see a list of installed plugins by navigating to the /-/plugins page of your Datasette instance - for example: https://fivethirtyeight.datasettes.com/-/plugins \n You can also use the datasette plugins command: \n datasette plugins \n Which outputs: \n [\n {\n \"name\": \"datasette_json_html\",\n \"static\": false,\n \"templates\": false,\n \"version\": \"0.4.0\"\n }\n] \n [[[cog\nfrom datasette import cli\nfrom click.testing import CliRunner\nimport textwrap, json\ncog.out(\"\\n\")\nresult = CliRunner().invoke(cli.cli, [\"plugins\", \"--all\"])\n# cog.out() with text containing newlines was unindenting for some reason\ncog.outl(\"If you run ``datasette plugins --all`` it will include default plugins that ship as part of Datasette:\\n\")\ncog.outl(\".. code-block:: json\\n\")\nplugins = [p for p in json.loads(result.output) if p[\"name\"].startswith(\"datasette.\")]\nindented = textwrap.indent(json.dumps(plugins, indent=4), \" \")\nfor line in indented.split(\"\\n\"):\n cog.outl(line)\ncog.out(\"\\n\\n\") \n ]]] \n If you run datasette plugins --all it will include default plugins that ship as part of Datasette: \n [\n {\n \"name\": \"datasette.actor_auth_cookie\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"actor_from_request\"\n ]\n },\n {\n \"name\": \"datasette.blob_renderer\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"register_output_renderer\"\n ]\n },\n {\n \"name\": \"datasette.default_magic_parameters\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"register_magic_parameters\"\n ]\n },\n {\n \"name\": \"datasette.default_menu_links\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"menu_links\"\n ]\n },\n {\n \"name\": \"datasette.default_permissions\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"actor_from_request\",\n \"permission_allowed\",\n \"register_permissions\",\n \"skip_csrf\"\n ]\n },\n {\n \"name\": \"datasette.events\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"register_events\"\n ]\n },\n {\n \"name\": \"datasette.facets\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"register_facet_classes\"\n ]\n },\n {\n \"name\": \"datasette.filters\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"filters_from_request\"\n ]\n },\n {\n \"name\": \"datasette.forbidden\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"forbidden\"\n ]\n },\n {\n \"name\": \"datasette.handle_exception\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"handle_exception\"\n ]\n },\n {\n \"name\": \"datasette.publish.cloudrun\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"publish_subcommand\"\n ]\n },\n {\n \"name\": \"datasette.publish.heroku\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"publish_subcommand\"\n ]\n },\n {\n \"name\": \"datasette.sql_functions\",\n \"static\": false,\n \"templates\": false,\n \"version\": null,\n \"hooks\": [\n \"prepare_connection\"\n ]\n }\n] \n [[[end]]] \n You can add the --plugins-dir= option to include any plugins found in that directory. 
\n Add --requirements to output a list of installed plugins that can then be installed in another Datasette instance using datasette install -r requirements.txt : \n datasette plugins --requirements \n The output will look something like this: \n datasette-codespaces==0.1.1\ndatasette-graphql==2.2\ndatasette-json-html==1.0.1\ndatasette-pretty-json==0.2.2\ndatasette-x-forwarded-host==0.1 \n To write that to a requirements.txt file, run this: \n datasette plugins --requirements > requirements.txt", "breadcrumbs": "[\"Plugins\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/-/plugins\", \"label\": \"https://fivethirtyeight.datasettes.com/-/plugins\"}]"}
{"id": "deploying:deploying-systemd", "page": "deploying", "ref": "deploying-systemd", "title": "Running Datasette using systemd", "content": "You can run Datasette on Ubuntu or Debian systems using systemd . \n First, ensure you have Python 3 and pip installed. On Ubuntu you can use sudo apt-get install python3 python3-pip . \n You can install Datasette into a virtual environment, or you can install it system-wide. To install system-wide, use sudo pip3 install datasette . \n Now create a folder for your Datasette databases, for example using mkdir /home/ubuntu/datasette-root . \n You can copy a test database into that folder like so: \n cd /home/ubuntu/datasette-root\ncurl -O https://latest.datasette.io/fixtures.db \n Create a file at /etc/systemd/system/datasette.service with the following contents: \n [Unit]\nDescription=Datasette\nAfter=network.target\n\n[Service]\nType=simple\nUser=ubuntu\nEnvironment=DATASETTE_SECRET=\nWorkingDirectory=/home/ubuntu/datasette-root\nExecStart=datasette serve . -h 127.0.0.1 -p 8000\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target \n Add a random value for the DATASETTE_SECRET - this will be used to sign Datasette cookies such as the CSRF token cookie. You can generate a suitable value like so: \n python3 -c 'import secrets; print(secrets.token_hex(32))' \n This configuration will run Datasette against all database files contained in the /home/ubuntu/datasette-root directory. If that directory contains a metadata.yml (or .json ) file or a templates/ or plugins/ sub-directory those will automatically be loaded by Datasette - see Configuration directory mode for details. \n You can start the Datasette process running using the following: \n sudo systemctl daemon-reload\nsudo systemctl start datasette.service \n You will need to restart the Datasette service after making changes to its metadata.json configuration or adding a new database file to that directory. You can do that using: \n sudo systemctl restart datasette.service \n Once the service has started you can confirm that Datasette is running on port 8000 like so: \n curl 127.0.0.1:8000/-/versions.json\n# Should output JSON showing the installed version \n Datasette will not be accessible from outside the server because it is listening on 127.0.0.1 . You can expose it by instead listening on 0.0.0.0 , but a better way is to set up a proxy such as nginx - see Running Datasette behind a proxy .", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[]"}
{"id": "writing_plugins:writing-plugins-designing-urls", "page": "writing_plugins", "ref": "writing-plugins-designing-urls", "title": "Designing URLs for your plugin", "content": "You can register new URL routes within Datasette using the register_routes(datasette) plugin hook. \n Datasette's default URLs include these: \n \n \n /dbname - database page \n \n \n /dbname/tablename - table page \n \n \n /dbname/tablename/pk - row page \n \n \n See Pages and API endpoints and Introspection for more default URL routes. \n To avoid accidentally conflicting with a database file that may be loaded into Datasette, plugins should register URLs using a /-/ prefix. For example, if your plugin adds a new interface for uploading Excel files you might register a URL route like this one: \n \n \n /-/upload-excel \n \n \n Try to avoid registering URLs that clash with other plugins that your users might have installed. There is no central repository of reserved URL paths (yet) but you can review existing plugins by browsing the plugins directory . \n If your plugin includes functionality that relates to a specific database you could also register a URL route like this: \n \n \n /dbname/-/upload-excel \n \n \n Or for a specific table like this: \n \n \n /dbname/tablename/-/modify-table-schema \n \n \n Note that a row could have a primary key of - and this URL scheme will still work, because Datasette row pages do not ever have a trailing slash followed by additional path components.", "breadcrumbs": "[\"Writing plugins\"]", "references": "[{\"href\": \"https://datasette.io/plugins\", \"label\": \"plugins directory\"}]"}
{"id": "changelog:control-http-caching-with-ttl", "page": "changelog", "ref": "control-http-caching-with-ttl", "title": "Control HTTP caching with ?_ttl=", "content": "You can now customize the HTTP max-age header that is sent on a per-URL basis, using the new ?_ttl= query string parameter. \n You can set this to any value in seconds, or you can set it to 0 to disable HTTP caching entirely. \n Consider for example this query which returns a randomly selected member of the Avengers: \n select * from [avengers/avengers] order by random() limit 1 \n If you hit the following page repeatedly you will get the same result, due to HTTP caching: \n /fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1 \n By adding ?_ttl=0 to the zero you can ensure the page will not be cached and get back a different super hero every time: \n /fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1&_ttl=0", "breadcrumbs": "[\"Changelog\", \"0.23 (2018-06-18)\"]", "references": "[{\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1\", \"label\": \"/fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1\"}, {\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1&_ttl=0\", \"label\": \"/fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1&_ttl=0\"}]"}
{"id": "changelog:id64", "page": "changelog", "ref": "id64", "title": "0.41 (2020-05-06)", "content": "You can now create custom pages within your Datasette instance using a custom template file. For example, adding a template file called templates/pages/about.html will result in a new page being served at /about on your instance. See the custom pages documentation for full details, including how to return custom HTTP headers, redirects and status codes. ( #648 ) \n Configuration directory mode ( #731 ) allows you to define a custom Datasette instance as a directory. So instead of running the following: \n datasette one.db two.db \\\n --metadata=metadata.json \\\n --template-dir=templates/ \\\n --plugins-dir=plugins \\\n --static css:css \n You can instead arrange your files in a single directory called my-project and run this: \n datasette my-project/ \n Also in this release: \n \n \n New NOT LIKE table filter: ?colname__notlike=expression . ( #750 ) \n \n \n Datasette now has a pattern portfolio at /-/patterns - e.g. https://latest.datasette.io/-/patterns . This is a page that shows every Datasette user interface component in one place, to aid core development and people building custom CSS themes. ( #151 ) \n \n \n SQLite PRAGMA functions such as pragma_table_info(tablename) are now allowed in Datasette SQL queries. ( #761 ) \n \n \n Datasette pages now consistently return a content-type of text/html; charset=utf-8\" . ( #752 ) \n \n \n Datasette now handles an ASGI raw_path value of None , which should allow compatibility with the Mangum adapter for running ASGI apps on AWS Lambda. Thanks, Colin Dellow. ( #719 ) \n \n \n Installation documentation now covers how to Using pipx . ( #756 ) \n \n \n Improved the documentation for Full-text search . ( #748 )", "breadcrumbs": "[\"Changelog\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/648\", \"label\": \"#648\"}, {\"href\": \"https://github.com/simonw/datasette/issues/731\", \"label\": \"#731\"}, {\"href\": \"https://github.com/simonw/datasette/issues/750\", \"label\": \"#750\"}, {\"href\": \"https://latest.datasette.io/-/patterns\", \"label\": \"https://latest.datasette.io/-/patterns\"}, {\"href\": \"https://github.com/simonw/datasette/issues/151\", \"label\": \"#151\"}, {\"href\": \"https://www.sqlite.org/pragma.html#pragfunc\", \"label\": \"PRAGMA functions\"}, {\"href\": \"https://github.com/simonw/datasette/issues/761\", \"label\": \"#761\"}, {\"href\": \"https://github.com/simonw/datasette/issues/752\", \"label\": \"#752\"}, {\"href\": \"https://github.com/erm/mangum\", \"label\": \"Mangum\"}, {\"href\": \"https://github.com/simonw/datasette/pull/719\", \"label\": \"#719\"}, {\"href\": \"https://github.com/simonw/datasette/issues/756\", \"label\": \"#756\"}, {\"href\": \"https://github.com/simonw/datasette/issues/748\", \"label\": \"#748\"}]"}
{"id": "installation:installing-plugins-using-pipx", "page": "installation", "ref": "installing-plugins-using-pipx", "title": "Installing plugins using pipx", "content": "You can install additional datasette plugins with pipx inject like so: \n pipx inject datasette datasette-json-html \n injected package datasette-json-html into venv datasette\ndone! \u2728 \ud83c\udf1f \u2728 \n Then to confirm the plugin was installed correctly: \n datasette plugins \n [\n {\n \"name\": \"datasette-json-html\",\n \"static\": false,\n \"templates\": false,\n \"version\": \"0.6\"\n }\n]", "breadcrumbs": "[\"Installation\", \"Advanced installation options\", \"Using pipx\"]", "references": "[]"}
{"id": "full_text_search:full-text-search-custom-sql", "page": "full_text_search", "ref": "full-text-search-custom-sql", "title": "Searches using custom SQL", "content": "You can include full-text search results in custom SQL queries. The general pattern with SQLite search is to run the search as a sub-select that returns rowid values, then include those rowids in another part of the query. \n You can see the syntax for a basic search by running that search on a table page and then clicking \"View and edit SQL\" to see the underlying SQL. For example, consider this search for manafort is the US FARA database : \n /fara/FARA_All_ShortForms?_search=manafort \n If you click View and edit SQL you'll see that the underlying SQL looks like this: \n select\n rowid,\n Short_Form_Termination_Date,\n Short_Form_Date,\n Short_Form_Last_Name,\n Short_Form_First_Name,\n Registration_Number,\n Registration_Date,\n Registrant_Name,\n Address_1,\n Address_2,\n City,\n State,\n Zip\nfrom\n FARA_All_ShortForms\nwhere\n rowid in (\n select\n rowid\n from\n FARA_All_ShortForms_fts\n where\n FARA_All_ShortForms_fts match escape_fts(:search)\n )\norder by\n rowid\nlimit\n 101", "breadcrumbs": "[\"Full-text search\"]", "references": "[{\"href\": \"https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort\", \"label\": \"manafort is the US FARA database\"}, {\"href\": \"https://fara.datasettes.com/fara?sql=select%0D%0A++rowid%2C%0D%0A++Short_Form_Termination_Date%2C%0D%0A++Short_Form_Date%2C%0D%0A++Short_Form_Last_Name%2C%0D%0A++Short_Form_First_Name%2C%0D%0A++Registration_Number%2C%0D%0A++Registration_Date%2C%0D%0A++Registrant_Name%2C%0D%0A++Address_1%2C%0D%0A++Address_2%2C%0D%0A++City%2C%0D%0A++State%2C%0D%0A++Zip%0D%0Afrom%0D%0A++FARA_All_ShortForms%0D%0Awhere%0D%0A++rowid+in+%28%0D%0A++++select%0D%0A++++++rowid%0D%0A++++from%0D%0A++++++FARA_All_ShortForms_fts%0D%0A++++where%0D%0A++++++FARA_All_ShortForms_fts+match+escape_fts%28%3Asearch%29%0D%0A++%29%0D%0Aorder+by%0D%0A++rowid%0D%0Alimit%0D%0A++101&search=manafort\", \"label\": \"View and edit SQL\"}]"}
{"id": "metadata:metadata-column-descriptions", "page": "metadata", "ref": "metadata-column-descriptions", "title": "Column descriptions", "content": "You can include descriptions for your columns by adding a \"columns\": {\"name-of-column\": \"description-of-column\"} block to your table metadata: \n [[[cog\nmetadata_example(cog, {\n \"databases\": {\n \"database1\": {\n \"tables\": {\n \"example_table\": {\n \"columns\": {\n \"column1\": \"Description of column 1\",\n \"column2\": \"Description of column 2\"\n }\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n These will be displayed at the top of the table page, and will also show in the cog menu for each column. \n You can see an example of how these look at latest.datasette.io/fixtures/roadside_attractions .", "breadcrumbs": "[\"Metadata\"]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures/roadside_attractions\", \"label\": \"latest.datasette.io/fixtures/roadside_attractions\"}]"}
{"id": "metadata:metadata-hiding-tables", "page": "metadata", "ref": "metadata-hiding-tables", "title": "Hiding tables", "content": "You can hide tables from the database listing view (in the same way that FTS and\n SpatiaLite tables are automatically hidden) using \"hidden\": true : \n [[[cog\nmetadata_example(cog, {\n \"databases\": {\n \"database1\": {\n \"tables\": {\n \"example_table\": {\n \"hidden\": True\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]]", "breadcrumbs": "[\"Metadata\"]", "references": "[]"}
{"id": "json_api:column-filter-arguments", "page": "json_api", "ref": "column-filter-arguments", "title": "Column filter arguments", "content": "You can filter the data returned by the table based on column values using a query string argument. \n \n \n ?column__exact=value or ?_column=value \n \n Returns rows where the specified column exactly matches the value. \n \n \n \n ?column__not=value \n \n Returns rows where the column does not match the value. \n \n \n \n ?column__contains=value \n \n Rows where the string column contains the specified value ( column like \"%value%\" in SQL). \n \n \n \n ?column__notcontains=value \n \n Rows where the string column does not contain the specified value ( column not like \"%value%\" in SQL). \n \n \n \n ?column__endswith=value \n \n Rows where the string column ends with the specified value ( column like \"%value\" in SQL). \n \n \n \n ?column__startswith=value \n \n Rows where the string column starts with the specified value ( column like \"value%\" in SQL). \n \n \n \n ?column__gt=value \n \n Rows which are greater than the specified value. \n \n \n \n ?column__gte=value \n \n Rows which are greater than or equal to the specified value. \n \n \n \n ?column__lt=value \n \n Rows which are less than the specified value. \n \n \n \n ?column__lte=value \n \n Rows which are less than or equal to the specified value. \n \n \n \n ?column__like=value \n \n Match rows with a LIKE clause, case insensitive and with % as the wildcard character. \n \n \n \n ?column__notlike=value \n \n Match rows that do not match the provided LIKE clause. \n \n \n \n ?column__glob=value \n \n Similar to LIKE but uses Unix wildcard syntax and is case sensitive. \n \n \n \n ?column__in=value1,value2,value3 \n \n Rows where column matches any of the provided values. \n You can use a comma separated string, or you can use a JSON array. \n The JSON array option is useful if one of your matching values itself contains a comma: \n ?column__in=[\"value\",\"value,with,commas\"] \n \n \n \n ?column__notin=value1,value2,value3 \n \n Rows where column does not match any of the provided values. The inverse of __in= . Also supports JSON arrays. \n \n \n \n ?column__arraycontains=value \n \n Works against columns that contain JSON arrays - matches if any of the values in that array match the provided value. \n This is only available if the json1 SQLite extension is enabled. \n \n \n \n ?column__arraynotcontains=value \n \n Works against columns that contain JSON arrays - matches if none of the values in that array match the provided value. \n This is only available if the json1 SQLite extension is enabled. \n \n \n \n ?column__date=value \n \n Column is a datestamp occurring on the specified YYYY-MM-DD date, e.g. 2018-01-02 . \n \n \n \n ?column__isnull=1 \n \n Matches rows where the column is null. \n \n \n \n ?column__notnull=1 \n \n Matches rows where the column is not null. \n \n \n \n ?column__isblank=1 \n \n Matches rows where the column is blank, meaning null or the empty string. \n \n \n \n ?column__notblank=1 \n \n Matches rows where the column is not blank.", "breadcrumbs": "[\"JSON API\", \"Table arguments\"]", "references": "[]"}
{"id": "custom_templates:custom-pages-parameters", "page": "custom_templates", "ref": "custom-pages-parameters", "title": "Path parameters for pages", "content": "You can define custom pages that match multiple paths by creating files with {variable} definitions in their filenames. \n For example, to capture any request to a URL matching /about/* , you would create a template in the following location: \n templates/pages/about/{slug}.html \n A hit to /about/news would render that template and pass in a variable called slug with a value of \"news\" . \n If you use this mechanism don't forget to return a 404 if the referenced content could not be found. You can do this using {{ raise_404() }} described below. \n Templates defined using custom page routes work particularly well with the sql() template function from datasette-template-sql or the graphql() template function from datasette-graphql .", "breadcrumbs": "[\"Custom pages and templates\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette-template-sql\", \"label\": \"datasette-template-sql\"}, {\"href\": \"https://github.com/simonw/datasette-graphql#the-graphql-template-function\", \"label\": \"datasette-graphql\"}]"}
{"id": "plugins:one-off-plugins-using-plugins-dir", "page": "plugins", "ref": "one-off-plugins-using-plugins-dir", "title": "One-off plugins using --plugins-dir", "content": "You can also define one-off per-project plugins by saving them as plugin_name.py functions in a plugins/ folder and then passing that folder to datasette using the --plugins-dir option: \n datasette mydb.db --plugins-dir=plugins/", "breadcrumbs": "[\"Plugins\", \"Installing plugins\"]", "references": "[]"}
{"id": "authentication:authentication-cli-create-token", "page": "authentication", "ref": "authentication-cli-create-token", "title": "datasette create-token", "content": "You can also create tokens on the command line using the datasette create-token command. \n This command takes one required argument - the ID of the actor to be associated with the created token. \n You can specify a -e/--expires-after option in seconds. If omitted, the token will never expire. \n The command will sign the token using the DATASETTE_SECRET environment variable, if available. You can also pass the secret using the --secret option. \n This means you can run the command locally to create tokens for use with a deployed Datasette instance, provided you know that instance's secret. \n To create a token for the root actor that will expire in one hour: \n datasette create-token root --expires-after 3600 \n To create a token that never expires using a specific secret: \n datasette create-token root --secret my-secret-goes-here", "breadcrumbs": "[\"Authentication and permissions\", \"API Tokens\"]", "references": "[]"}
{"id": "custom_templates:id1", "page": "custom_templates", "ref": "id1", "title": "Custom pages", "content": "You can add templated pages to your Datasette instance by creating HTML files in a pages directory within your templates directory. \n For example, to add a custom page that is served at http://localhost/about you would create a file in templates/pages/about.html , then start Datasette like this: \n datasette mydb.db --template-dir=templates/ \n You can nest directories within pages to create a nested structure. To create a http://localhost:8001/about/map page you would create templates/pages/about/map.html .", "breadcrumbs": "[\"Custom pages and templates\", \"Publishing static assets\"]", "references": "[]"}
{"id": "changelog:v1-0-a1", "page": "changelog", "ref": "v1-0-a1", "title": "1.0a1 (2022-12-01)", "content": "Write APIs now serve correct CORS headers if Datasette is started in --cors mode. See the full list of CORS headers in the documentation. ( #1922 ) \n \n \n Fixed a bug where the _memory database could be written to even though writes were not persisted. ( #1917 ) \n \n \n The https://latest.datasette.io/ demo instance now includes an ephemeral database which can be used to test Datasette's write APIs, using the new datasette-ephemeral-tables plugin to drop any created tables after five minutes. This database is only available if you sign in as the root user using the link on the homepage. ( #1915 ) \n \n \n Fixed a bug where hitting the write endpoints with a GET request returned a 500 error. It now returns a 405 (method not allowed) error instead. ( #1916 ) \n \n \n The list of endpoints in the API explorer now lists mutable databases first. ( #1918 ) \n \n \n The \"ignore\": true and \"replace\": true options for the insert API are now documented . ( #1924 )", "breadcrumbs": "[\"Changelog\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/1922\", \"label\": \"#1922\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1917\", \"label\": \"#1917\"}, {\"href\": \"https://latest.datasette.io/\", \"label\": \"https://latest.datasette.io/\"}, {\"href\": \"https://datasette.io/plugins/datasette-ephemeral-tables\", \"label\": \"datasette-ephemeral-tables\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1915\", \"label\": \"#1915\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1916\", \"label\": \"#1916\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1918\", \"label\": \"#1918\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1924\", \"label\": \"#1924\"}]"}
{"id": "changelog:flash-messages", "page": "changelog", "ref": "flash-messages", "title": "Flash messages", "content": "Writable canned queries needed a mechanism to let the user know that the query has been successfully executed. The new flash messaging system ( #790 ) allows messages to persist in signed cookies which are then displayed to the user on the next page that they visit. Plugins can use this mechanism to display their own messages, see .add_message(request, message, type=datasette.INFO) for details. \n You can try out the new messages using the /-/messages debug tool, for example at https://latest.datasette.io/-/messages", "breadcrumbs": "[\"Changelog\", \"0.44 (2020-06-11)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/790\", \"label\": \"#790\"}, {\"href\": \"https://latest.datasette.io/-/messages\", \"label\": \"https://latest.datasette.io/-/messages\"}]"}
{"id": "sql_queries:canned-queries-json-api", "page": "sql_queries", "ref": "canned-queries-json-api", "title": "JSON API for writable canned queries", "content": "Writable canned queries can also be accessed using a JSON API. You can POST data to them using JSON, and you can request that their response is returned to you as JSON. \n To submit JSON to a writable canned query, encode key/value parameters as a JSON document: \n POST /mydatabase/add_message\n\n{\"message\": \"Message goes here\"} \n You can also continue to submit data using regular form encoding, like so: \n POST /mydatabase/add_message\n\nmessage=Message+goes+here \n There are three options for specifying that you would like the response to your request to return JSON data, as opposed to an HTTP redirect to another page. \n \n \n Set an Accept: application/json header on your request \n \n \n Include ?_json=1 in the URL that you POST to \n \n \n Include \"_json\": 1 in your JSON body, or &_json=1 in your form encoded body \n \n \n The JSON response will look like this: \n {\n \"ok\": true,\n \"message\": \"Query executed, 1 row affected\",\n \"redirect\": \"/data/add_name\"\n} \n The \"message\" and \"redirect\" values here will take into account on_success_message , on_success_message_sql , on_success_redirect , on_error_message and on_error_redirect , if they have been set.", "breadcrumbs": "[\"Running SQL queries\", \"Canned queries\"]", "references": "[]"}
{"id": "changelog:id46", "page": "changelog", "ref": "id46", "title": "Smaller changes", "content": "Wide tables shown within Datasette now scroll horizontally ( #998 ). This is achieved using a new element which may impact the implementation of some plugins (for example this change to datasette-cluster-map ). \n \n \n New debug-menu permission. ( #1068 ) \n \n \n Removed --debug option, which didn't do anything. ( #814 ) \n \n \n Link: HTTP header pagination. ( #1014 ) \n \n \n x button for clearing filters. ( #1016 ) \n \n \n Edit SQL button on canned queries, ( #1019 ) \n \n \n --load-extension=spatialite shortcut. ( #1028 ) \n \n \n scale-in animation for column action menu. ( #1039 ) \n \n \n Option to pass a list of templates to .render_template() is now documented. ( #1045 ) \n \n \n New datasette.urls.static_plugins() method. ( #1033 ) \n \n \n datasette -o option now opens the most relevant page. ( #976 ) \n \n \n datasette --cors option now enables access to /database.db downloads. ( #1057 ) \n \n \n Database file downloads now implement cascading permissions, so you can download a database if you have view-database-download permission even if you do not have permission to access the Datasette instance. ( #1058 ) \n \n \n New documentation on Designing URLs for your plugin . ( #1053 )", "breadcrumbs": "[\"Changelog\", \"0.51 (2020-10-31)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/998\", \"label\": \"#998\"}, {\"href\": \"https://github.com/simonw/datasette-cluster-map/commit/fcb4abbe7df9071c5ab57defd39147de7145b34e\", \"label\": \"this change to datasette-cluster-map\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1068\", \"label\": \"#1068\"}, {\"href\": \"https://github.com/simonw/datasette/issues/814\", \"label\": \"#814\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1014\", \"label\": \"#1014\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1016\", \"label\": \"#1016\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1019\", \"label\": \"#1019\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1028\", \"label\": \"#1028\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1039\", \"label\": \"#1039\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1045\", \"label\": \"#1045\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1033\", \"label\": \"#1033\"}, {\"href\": \"https://github.com/simonw/datasette/issues/976\", \"label\": \"#976\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1057\", \"label\": \"#1057\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1058\", \"label\": \"#1058\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1053\", \"label\": \"#1053\"}]"}
{"id": "writing_plugins:writing-plugins-configuration", "page": "writing_plugins", "ref": "writing-plugins-configuration", "title": "Writing plugins that accept configuration", "content": "When you are writing plugins, you can access plugin configuration like this using the datasette plugin_config() method. If you know you need plugin configuration for a specific table, you can access it like this: \n plugin_config = datasette.plugin_config(\n \"datasette-cluster-map\", database=\"sf-trees\", table=\"Street_Tree_List\"\n) \n This will return the {\"latitude_column\": \"lat\", \"longitude_column\": \"lng\"} in the above example. \n If there is no configuration for that plugin, the method will return None . \n If it cannot find the requested configuration at the table layer, it will fall back to the database layer and then the root layer. For example, a user may have set the plugin configuration option inside datasette.yaml like so: \n [[[cog\nfrom metadata_doc import metadata_example\nmetadata_example(cog, {\n \"databases\": {\n \"sf-trees\": {\n \"plugins\": {\n \"datasette-cluster-map\": {\n \"latitude_column\": \"xlat\",\n \"longitude_column\": \"xlng\"\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n In this case, the above code would return that configuration for ANY table within the sf-trees database. \n The plugin configuration could also be set at the top level of datasette.yaml : \n [[[cog\nmetadata_example(cog, {\n \"plugins\": {\n \"datasette-cluster-map\": {\n \"latitude_column\": \"xlat\",\n \"longitude_column\": \"xlng\"\n }\n }\n}) \n ]]] \n [[[end]]] \n Now that datasette-cluster-map plugin configuration will apply to every table in every database.", "breadcrumbs": "[\"Writing plugins\"]", "references": "[]"}
{"id": "testing_plugins:testing-plugins-register-in-test", "page": "testing_plugins", "ref": "testing-plugins-register-in-test", "title": "Registering a plugin for the duration of a test", "content": "When writing tests for plugins you may find it useful to register a test plugin just for the duration of a single test. You can do this using pm.register() and pm.unregister() like this: \n from datasette import hookimpl\nfrom datasette.app import Datasette\nfrom datasette.plugins import pm\nimport pytest\n\n\n@pytest.mark.asyncio\nasync def test_using_test_plugin():\n class TestPlugin:\n __name__ = \"TestPlugin\"\n\n # Use hookimpl and method names to register hooks\n @hookimpl\n def register_routes(self):\n return [\n (r\"^/error$\", lambda: 1 / 0),\n ]\n\n pm.register(TestPlugin(), name=\"undo\")\n try:\n # The test implementation goes here\n datasette = Datasette()\n response = await datasette.client.get(\"/error\")\n assert response.status_code == 500\n finally:\n pm.unregister(name=\"undo\") \n To reuse the same temporary plugin in multiple tests, you can register it inside a fixture in your conftest.py file like this: \n from datasette import hookimpl\nfrom datasette.app import Datasette\nfrom datasette.plugins import pm\nimport pytest\nimport pytest_asyncio\n\n\n@pytest_asyncio.fixture\nasync def datasette_with_plugin():\n class TestPlugin:\n __name__ = \"TestPlugin\"\n\n @hookimpl\n def register_routes(self):\n return [\n (r\"^/error$\", lambda: 1 / 0),\n ]\n\n pm.register(TestPlugin(), name=\"undo\")\n try:\n yield Datasette()\n finally:\n pm.unregister(name=\"undo\")\n \n Note the yield statement here - this ensures that the finally: block that unregisters the plugin is executed only after the test function itself has completed. \n Then in a test: \n @pytest.mark.asyncio\nasync def test_error(datasette_with_plugin):\n response = await datasette_with_plugin.client.get(\"/error\")\n assert response.status_code == 500", "breadcrumbs": "[\"Testing plugins\"]", "references": "[]"}
{"id": "changelog:v1-0-a5", "page": "changelog", "ref": "v1-0-a5", "title": "1.0a5 (2023-08-29)", "content": "When restrictions are applied to API tokens , those restrictions now behave slightly differently: applying the view-table restriction will imply the ability to view-database for the database containing that table, and both view-table and view-database will imply view-instance . Previously you needed to create a token with restrictions that explicitly listed view-instance and view-database and view-table in order to view a table without getting a permission denied error. ( #2102 ) \n \n \n New datasette.yaml (or .json ) configuration file, which can be specified using datasette -c path-to-file . The goal here to consolidate settings, plugin configuration, permissions, canned queries, and other Datasette configuration into a single single file, separate from metadata.yaml . The legacy settings.json config file used for Configuration directory mode has been removed, and datasette.yaml has a \"settings\" section where the same settings key/value pairs can be included. In the next future alpha release, more configuration such as plugins/permissions/canned queries will be moved to the datasette.yaml file. See #2093 for more details. Thanks, Alex Garcia. \n \n \n The -s/--setting option can now take dotted paths to nested settings. These will then be used to set or over-ride the same options as are present in the new configuration file. ( #2156 ) \n \n \n New --actor '{\"id\": \"json-goes-here\"}' option for use with datasette --get to treat the simulated request as being made by a specific actor, see datasette --get . ( #2153 ) \n \n \n The Datasette _internal database has had some changes. It no longer shows up in the datasette.databases list by default, and is now instead available to plugins using the datasette.get_internal_database() . Plugins are invited to use this as a private database to store configuration and settings and secrets that should not be made visible through the default Datasette interface. Users can pass the new --internal internal.db option to persist that internal database to disk. Thanks, Alex Garcia. ( #2157 ).", "breadcrumbs": "[\"Changelog\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/2102\", \"label\": \"#2102\"}, {\"href\": \"https://github.com/simonw/datasette/issues/2093\", \"label\": \"#2093\"}, {\"href\": \"https://github.com/simonw/datasette/issues/2156\", \"label\": \"#2156\"}, {\"href\": \"https://github.com/simonw/datasette/issues/2153\", \"label\": \"#2153\"}, {\"href\": \"https://github.com/simonw/datasette/issues/2157\", \"label\": \"#2157\"}]"}
{"id": "changelog:foreign-key-expansions", "page": "changelog", "ref": "foreign-key-expansions", "title": "Foreign key expansions", "content": "When Datasette detects a foreign key reference it attempts to resolve a label\n for that reference (automatically or using the Specifying the label column for a table metadata\n option) so it can display a link to the associated row. \n This expansion is now also available for JSON and CSV representations of the\n table, using the new _labels=on query string option. See\n Expanding foreign key references for more details.", "breadcrumbs": "[\"Changelog\", \"0.23 (2018-06-18)\"]", "references": "[]"}
{"id": "settings:setting-facet-suggest-time-limit-ms", "page": "settings", "ref": "setting-facet-suggest-time-limit-ms", "title": "facet_suggest_time_limit_ms", "content": "When Datasette calculates suggested facets it needs to run a SQL query for every column in your table. The default for this time limit is 50ms to account for the fact that it needs to run once for every column. If the time limit is exceeded the column will not be suggested as a facet. \n You can increase this time limit like so: \n datasette mydatabase.db --setting facet_suggest_time_limit_ms 500", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"}
{"id": "full_text_search:configuring-fts-by-hand", "page": "full_text_search", "ref": "configuring-fts-by-hand", "title": "Configuring FTS by hand", "content": "We recommend using sqlite-utils , but if you want to hand-roll a SQLite full-text search table you can do so using the following SQL. \n To enable full-text search for a table called items that works against the name and description columns, you would run this SQL to create a new items_fts FTS virtual table: \n CREATE VIRTUAL TABLE \"items_fts\" USING FTS4 (\n name,\n description,\n content=\"items\"\n); \n This creates a set of tables to power full-text search against items . The new items_fts table will be detected by Datasette as the fts_table for the items table. \n Creating the table is not enough: you also need to populate it with a copy of the data that you wish to make searchable. You can do that using the following SQL: \n INSERT INTO \"items_fts\" (rowid, name, description)\n SELECT rowid, name, description FROM items; \n If your table has columns that are foreign key references to other tables you can include that data in your full-text search index using a join. Imagine the items table has a foreign key column called category_id which refers to a categories table - you could create a full-text search table like this: \n CREATE VIRTUAL TABLE \"items_fts\" USING FTS4 (\n name,\n description,\n category_name,\n content=\"items\"\n); \n And then populate it like this: \n INSERT INTO \"items_fts\" (rowid, name, description, category_name)\n SELECT items.rowid,\n items.name,\n items.description,\n categories.name\n FROM items JOIN categories ON items.category_id=categories.id; \n You can use this technique to populate the full-text search index from any combination of tables and joins that makes sense for your project.", "breadcrumbs": "[\"Full-text search\", \"Enabling full-text search for a SQLite table\"]", "references": "[{\"href\": \"https://sqlite-utils.datasette.io/\", \"label\": \"sqlite-utils\"}]"}
{"id": "testing_plugins:id1", "page": "testing_plugins", "ref": "id1", "title": "Testing plugins", "content": "We recommend using pytest to write automated tests for your plugins. \n If you use the template described in Starting an installable plugin using cookiecutter your plugin will start with a single test in your tests/ directory that looks like this: \n from datasette.app import Datasette\nimport pytest\n\n\n@pytest.mark.asyncio\nasync def test_plugin_is_installed():\n datasette = Datasette(memory=True)\n response = await datasette.client.get(\"/-/plugins.json\")\n assert response.status_code == 200\n installed_plugins = {p[\"name\"] for p in response.json()}\n assert (\n \"datasette-plugin-template-demo\"\n in installed_plugins\n ) \n This test uses the datasette.client object to exercise a test instance of Datasette. datasette.client is a wrapper around the HTTPX Python library which can imitate HTTP requests using ASGI. This is the recommended way to write tests against a Datasette instance. \n This test also uses the pytest-asyncio package to add support for async def test functions running under pytest. \n You can install these packages like so: \n pip install pytest pytest-asyncio \n If you are building an installable package you can add them as test dependencies to your setup.py module like this: \n setup(\n name=\"datasette-my-plugin\",\n # ...\n extras_require={\"test\": [\"pytest\", \"pytest-asyncio\"]},\n tests_require=[\"datasette-my-plugin[test]\"],\n) \n You can then install the test dependencies like so: \n pip install -e '.[test]' \n Then run the tests using pytest like so: \n pytest", "breadcrumbs": "[]", "references": "[{\"href\": \"https://docs.pytest.org/\", \"label\": \"pytest\"}, {\"href\": \"https://www.python-httpx.org/\", \"label\": \"HTTPX\"}, {\"href\": \"https://pypi.org/project/pytest-asyncio/\", \"label\": \"pytest-asyncio\"}]"}
{"id": "changelog:id86", "page": "changelog", "ref": "id86", "title": "Small changes", "content": "We now show the size of the database file next to the download link ( #172 ) \n \n \n New /-/databases introspection page shows currently connected databases ( #470 ) \n \n \n Binary data is no longer displayed on the table and row pages ( #442 - thanks, Russ Garrett) \n \n \n New show/hide SQL links on custom query pages ( #415 ) \n \n \n The extra_body_script plugin hook now accepts an optional view_name argument ( #443 - thanks, Russ Garrett) \n \n \n Bumped Jinja2 dependency to 2.10.1 ( #426 ) \n \n \n All table filters are now documented, and documentation is enforced via unit tests ( 2c19a27 ) \n \n \n New project guideline: master should stay shippable at all times! ( 31f36e1 ) \n \n \n Fixed a bug where sqlite_timelimit() occasionally failed to clean up after itself ( bac4e01 ) \n \n \n We no longer load additional plugins when executing pytest ( #438 ) \n \n \n Homepage now links to database views if there are less than five tables in a database ( #373 ) \n \n \n The --cors option is now respected by error pages ( #453 ) \n \n \n datasette publish heroku now uses the --include-vcs-ignore option, which means it works under Travis CI ( #407 ) \n \n \n datasette publish heroku now publishes using Python 3.6.8 ( 666c374 ) \n \n \n Renamed datasette publish now to datasette publish nowv1 ( #472 ) \n \n \n datasette publish nowv1 now accepts multiple --alias parameters ( 09ef305 ) \n \n \n Removed the datasette skeleton command ( #476 ) \n \n \n The documentation on how to build the documentation now recommends sphinx-autobuild", "breadcrumbs": "[\"Changelog\", \"0.28 (2019-05-19)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/172\", \"label\": \"#172\"}, {\"href\": \"https://github.com/simonw/datasette/issues/470\", \"label\": \"#470\"}, {\"href\": \"https://github.com/simonw/datasette/pull/442\", \"label\": \"#442\"}, {\"href\": \"https://github.com/simonw/datasette/issues/415\", \"label\": \"#415\"}, {\"href\": \"https://github.com/simonw/datasette/pull/443\", \"label\": \"#443\"}, {\"href\": \"https://github.com/simonw/datasette/pull/426\", \"label\": \"#426\"}, {\"href\": \"https://github.com/simonw/datasette/commit/2c19a27d15a913e5f3dd443f04067169a6f24634\", \"label\": \"2c19a27\"}, {\"href\": \"https://github.com/simonw/datasette/commit/31f36e1b97ccc3f4387c80698d018a69798b6228\", \"label\": \"31f36e1\"}, {\"href\": \"https://github.com/simonw/datasette/commit/bac4e01f40ae7bd19d1eab1fb9349452c18de8f5\", \"label\": \"bac4e01\"}, {\"href\": \"https://github.com/simonw/datasette/issues/438\", \"label\": \"#438\"}, {\"href\": \"https://github.com/simonw/datasette/issues/373\", \"label\": \"#373\"}, {\"href\": \"https://github.com/simonw/datasette/issues/453\", \"label\": \"#453\"}, {\"href\": \"https://github.com/simonw/datasette/pull/407\", \"label\": \"#407\"}, {\"href\": \"https://github.com/simonw/datasette/commit/666c37415a898949fae0437099d62a35b1e9c430\", \"label\": \"666c374\"}, {\"href\": \"https://github.com/simonw/datasette/issues/472\", \"label\": \"#472\"}, {\"href\": \"https://github.com/simonw/datasette/commit/09ef305c687399384fe38487c075e8669682deb4\", \"label\": \"09ef305\"}, {\"href\": \"https://github.com/simonw/datasette/issues/476\", \"label\": \"#476\"}]"}
{"id": "publish:publish-vercel", "page": "publish", "ref": "publish-vercel", "title": "Publishing to Vercel", "content": "Vercel - previously known as Zeit Now - provides a layer over AWS Lambda to allow for quick, scale-to-zero deployment. You can deploy Datasette instances to Vercel using the datasette-publish-vercel plugin. \n pip install datasette-publish-vercel\ndatasette publish vercel mydatabase.db --project my-database-project \n Not every feature is supported: consult the datasette-publish-vercel README for more details.", "breadcrumbs": "[\"Publishing data\", \"datasette publish\"]", "references": "[{\"href\": \"https://vercel.com/\", \"label\": \"Vercel\"}, {\"href\": \"https://github.com/simonw/datasette-publish-vercel\", \"label\": \"datasette-publish-vercel\"}, {\"href\": \"https://github.com/simonw/datasette-publish-vercel/blob/main/README.md\", \"label\": \"datasette-publish-vercel README\"}]"}
{"id": "changelog:id212", "page": "changelog", "ref": "id212", "title": "0.8 (2017-11-13)", "content": "V0.8 - added PyPI metadata, ready to ship. \n \n \n Implemented offset/limit pagination for views ( #70 ). \n \n \n Improved pagination. ( #78 ) \n \n \n Limit on max rows returned, controlled by --max_returned_rows option. ( #69 ) \n If someone executes 'select * from table' against a table with a million rows\n in it, we could run into problems: just serializing that much data as JSON is\n likely to lock up the server. \n Solution: we now have a hard limit on the maximum number of rows that can be\n returned by a query. If that limit is exceeded, the server will return a\n \"truncated\": true field in the JSON. \n This limit can be optionally controlled by the new --max_returned_rows \n option. Setting that option to 0 disables the limit entirely.", "breadcrumbs": "[\"Changelog\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/70\", \"label\": \"#70\"}, {\"href\": \"https://github.com/simonw/datasette/issues/78\", \"label\": \"#78\"}, {\"href\": \"https://github.com/simonw/datasette/issues/69\", \"label\": \"#69\"}]"}
{"id": "internals:internals-utils-await-me-maybe", "page": "internals", "ref": "internals-utils-await-me-maybe", "title": "await_me_maybe(value)", "content": "Utility function for calling await on a return value if it is awaitable, otherwise returning the value. This is used by Datasette to support plugin hooks that can optionally return awaitable functions. Read more about this function in The \u201cawait me maybe\u201d pattern for Python asyncio . \n \n \n async datasette.utils. await_me_maybe value : Any Any \n \n If value is callable, call it. If awaitable, await it. Otherwise return it.", "breadcrumbs": "[\"Internals for plugins\", \"The datasette.utils module\"]", "references": "[{\"href\": \"https://simonwillison.net/2020/Sep/2/await-me-maybe/\", \"label\": \"The \u201cawait me maybe\u201d pattern for Python asyncio\"}]"}
{"id": "cli-reference:cli-help-uninstall-help", "page": "cli-reference", "ref": "cli-help-uninstall-help", "title": "datasette uninstall", "content": "Uninstall one or more plugins. \n [[[cog\nhelp([\"uninstall\", \"--help\"]) \n ]]] \n Usage: datasette uninstall [OPTIONS] PACKAGES...\n\n Uninstall plugins and Python packages from the Datasette environment\n\nOptions:\n -y, --yes Don't ask for confirmation\n --help Show this message and exit. \n [[[end]]]", "breadcrumbs": "[\"CLI reference\"]", "references": "[]"}
{"id": "authentication:permissions-view-instance", "page": "authentication", "ref": "permissions-view-instance", "title": "view-instance", "content": "Top level permission - Actor is allowed to view any pages within this instance, starting at https://latest.datasette.io/ \n Default allow .", "breadcrumbs": "[\"Authentication and permissions\", \"Built-in permissions\"]", "references": "[{\"href\": \"https://latest.datasette.io/\", \"label\": \"https://latest.datasette.io/\"}]"}