{"id": "internals:datasette-resolve-database", "page": "internals", "ref": "datasette-resolve-database", "title": ".resolve_database(request)", "content": "request - Request object \n \n A request object \n \n \n \n If you are implementing your own custom views, you may need to resolve the database that the user is requesting based on a URL path. If the regular expression for your route declares a database named group, you can use this method to resolve the database object. \n This returns a Database instance. \n If the database cannot be found, it raises a datasette.utils.asgi.DatabaseNotFound exception - which is a subclass of datasette.utils.asgi.NotFound with a .database_name attribute set to the name of the database that was requested.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-resolve-row", "page": "internals", "ref": "datasette-resolve-row", "title": ".resolve_row(request)", "content": "request - Request object \n \n A request object \n \n \n \n This method assumes your route declares named groups for database , table and pks . \n It returns a ResolvedRow named tuple instance with the following fields: \n \n \n db - Database \n \n The database object \n \n \n \n table - string \n \n The name of the table \n \n \n \n sql - string \n \n SQL snippet that can be used in a WHERE clause to select the row \n \n \n \n params - dict \n \n Parameters that should be passed to the SQL query \n \n \n \n pks - list \n \n List of primary key column names \n \n \n \n pk_values - list \n \n List of primary key values decoded from the URL \n \n \n \n row - sqlite3.Row \n \n The row itself \n \n \n \n If the database or table cannot be found it raises a datasette.utils.asgi.DatabaseNotFound exception. \n If the table does not exist it raises a datasette.utils.asgi.TableNotFound exception. \n If the row cannot be found it raises a datasette.utils.asgi.RowNotFound exception. This has .database_name , .table and .pk_values attributes, extracted from the request path.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-resolve-table", "page": "internals", "ref": "datasette-resolve-table", "title": ".resolve_table(request)", "content": "request - Request object \n \n A request object \n \n \n \n This assumes that the regular expression for your route declares both a database and a table named group. \n It returns a ResolvedTable named tuple instance with the following fields: \n \n \n db - Database \n \n The database object \n \n \n \n table - string \n \n The name of the table (or view) \n \n \n \n is_view - boolean \n \n True if this is a view, False if it is a table \n \n \n \n If the database or table cannot be found it raises a datasette.utils.asgi.DatabaseNotFound exception. \n If the table does not exist it raises a datasette.utils.asgi.TableNotFound exception - a subclass of datasette.utils.asgi.NotFound with .database_name and .table attributes.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-setting", "page": "internals", "ref": "datasette-setting", "title": ".setting(key)", "content": "key - string \n \n The name of the setting, e.g. base_url . \n \n \n \n Returns the configured value for the specified setting . This can be a string, boolean or integer depending on the requested setting. 
\n For example: \n downloads_are_allowed = datasette.setting(\"allow_download\")", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-track-event", "page": "internals", "ref": "datasette-track-event", "title": "await .track_event(event)", "content": "event - Event \n \n An instance of a subclass of datasette.events.Event . \n \n \n \n Plugins can call this to track events, using classes they have previously registered. See Event tracking for details. \n The event will then be passed to all plugins that have registered to receive events using the track_event(datasette, event) hook. \n Example usage, assuming the plugin has previously registered the BanUserEvent class: \n await datasette.track_event(\n BanUserEvent(user={\"id\": 1, \"username\": \"cleverbot\"})\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-unsign", "page": "internals", "ref": "datasette-unsign", "title": ".unsign(value, namespace=\"default\")", "content": "value - string \n \n The signed string that was created using .sign(value, namespace=\"default\") . \n \n \n \n namespace - string, optional \n \n The alternative namespace, if one was used. \n \n \n \n Returns the original, decoded object that was passed to .sign(value, namespace=\"default\") . If the signature is not valid this raises an itsdangerous.BadSignature exception.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:id1", "page": "internals", "ref": "id1", "title": ".get_internal_database()", "content": "Returns a database object for reading and writing to the private internal database .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:internals", "page": "internals", "ref": "internals", "title": "Internals for plugins", "content": "Many Plugin hooks are passed objects that provide access to internal Datasette functionality. The interface to these objects should not be considered stable with the exception of methods that are documented here.", "breadcrumbs": "[]", "references": "[]"} {"id": "internals:internals-database", "page": "internals", "ref": "internals-database", "title": "Database class", "content": "Instances of the Database class can be used to execute queries against attached SQLite databases, and to run introspection against their schemas.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-database-introspection", "page": "internals", "ref": "internals-database-introspection", "title": "Database introspection", "content": "The Database class also provides properties and methods for introspecting the database. \n \n \n db.name - string \n \n The name of the database - usually the filename without the .db extension. \n \n \n \n db.size - integer \n \n The size of the database file in bytes. 0 for :memory: databases. \n \n \n \n db.mtime_ns - integer or None \n \n The last modification time of the database file in nanoseconds since the epoch. None for :memory: databases. \n \n \n \n db.is_mutable - boolean \n \n Is this database mutable, and allowed to accept writes? \n \n \n \n db.is_memory - boolean \n \n Is this database an in-memory database? \n \n \n \n await db.attached_databases() - list of named tuples \n \n Returns a list of additional databases that have been connected to this database using the SQLite ATTACH command.
Each named tuple has fields seq , name and file . \n \n \n \n await db.table_exists(table) - boolean \n \n Check if a table called table exists. \n \n \n \n await db.view_exists(view) - boolean \n \n Check if a view called view exists. \n \n \n \n await db.table_names() - list of strings \n \n List of names of tables in the database. \n \n \n \n await db.view_names() - list of strings \n \n List of names of views in the database. \n \n \n \n await db.table_columns(table) - list of strings \n \n Names of columns in a specific table. \n \n \n \n await db.table_column_details(table) - list of named tuples \n \n Full details of the columns in a specific table. Each column is represented by a Column named tuple with fields cid (integer representing the column position), name (string), type (string, e.g. REAL or VARCHAR(30) ), notnull (integer 1 or 0), default_value (string or None), is_pk (integer 1 or 0). \n \n \n \n await db.primary_keys(table) - list of strings \n \n Names of the columns that are part of the primary key for this table. \n \n \n \n await db.fts_table(table) - string or None \n \n The name of the FTS table associated with this table, if one exists. \n \n \n \n await db.label_column_for_table(table) - string or None \n \n The label column that is associated with this table - either automatically detected or using the \"label_column\" key from Metadata , see Specifying the label column for a table . \n \n \n \n await db.foreign_keys_for_table(table) - list of dictionaries \n \n Details of columns in this table which are foreign keys to other tables. A list of dictionaries where each dictionary is shaped like this: {\"column\": string, \"other_table\": string, \"other_column\": string} . \n \n \n \n await db.hidden_table_names() - list of strings \n \n List of tables which Datasette \"hides\" by default - usually these are tables associated with SQLite's full-text search feature, the SpatiaLite extension or tables hidden using the Hiding tables feature. \n \n \n \n await db.get_table_definition(table) - string \n \n Returns the SQL definition for the table - the CREATE TABLE statement and any associated CREATE INDEX statements. \n \n \n \n await db.get_view_definition(view) - string \n \n Returns the SQL definition of the named view. \n \n \n \n await db.get_all_foreign_keys() - dictionary \n \n Dictionary representing both incoming and outgoing foreign keys for this table. It has two keys, \"incoming\" and \"outgoing\" , each of which is a list of dictionaries with keys \"column\" , \"other_table\" and \"other_column\" . For example: \n {\n \"incoming\": [],\n \"outgoing\": [\n {\n \"other_table\": \"attraction_characteristic\",\n \"column\": \"characteristic_id\",\n \"other_column\": \"pk\",\n },\n {\n \"other_table\": \"roadside_attractions\",\n \"column\": \"attraction_id\",\n \"other_column\": \"pk\",\n }\n ]\n}", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:internals-datasette", "page": "internals", "ref": "internals-datasette", "title": "Datasette class", "content": "This object is an instance of the Datasette class, passed to many plugin hooks as an argument called datasette . 
\n You can create your own instance of this - for example to help write tests for a plugin - like so: \n from datasette.app import Datasette\n\n# With no arguments a single in-memory database will be attached\ndatasette = Datasette()\n\n# The files= argument can load files from disk\ndatasette = Datasette(files=[\"/path/to/my-database.db\"])\n\n# Pass metadata as a JSON dictionary like this\ndatasette = Datasette(\n files=[\"/path/to/my-database.db\"],\n metadata={\n \"databases\": {\n \"my-database\": {\n \"description\": \"This is my database\"\n }\n }\n },\n) \n Constructor parameters include: \n \n \n files=[...] - a list of database files to open \n \n \n immutables=[...] - a list of database files to open in immutable mode \n \n \n metadata={...} - a dictionary of Metadata \n \n \n config_dir=... - the configuration directory to use, stored in datasette.config_dir", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-datasette-urls", "page": "internals", "ref": "internals-datasette-urls", "title": "datasette.urls", "content": "The datasette.urls object contains methods for building URLs to pages within Datasette. Plugins should use this to link to pages, since these methods take into account any base_url configuration setting that might be in effect. \n \n \n datasette.urls.instance(format=None) \n \n Returns the URL to the Datasette instance root page. This is usually \"/\" . \n \n \n \n datasette.urls.path(path, format=None) \n \n Takes a path and returns the full path, taking base_url into account. \n For example, datasette.urls.path(\"-/logout\") will return the path to the logout page, which will be \"/-/logout\" by default or /prefix-path/-/logout if base_url is set to /prefix-path/ \n \n \n \n datasette.urls.logout() \n \n Returns the URL to the logout page, usually \"/-/logout\" \n \n \n \n datasette.urls.static(path) \n \n Returns the URL of one of Datasette's default static assets, for example \"/-/static/app.css\" \n \n \n \n datasette.urls.static_plugins(plugin_name, path) \n \n Returns the URL of one of the static assets belonging to a plugin. \n datasette.urls.static_plugins(\"datasette_cluster_map\", \"datasette-cluster-map.js\") would return \"/-/static-plugins/datasette_cluster_map/datasette-cluster-map.js\" \n \n \n \n datasette.urls.database(database_name, format=None) \n \n Returns the URL to a database page, for example \"/fixtures\" \n \n \n \n datasette.urls.table(database_name, table_name, format=None) \n \n Returns the URL to a table page, for example \"/fixtures/facetable\" \n \n \n \n datasette.urls.query(database_name, query_name, format=None) \n \n Returns the URL to a query page, for example \"/fixtures/pragma_cache_size\" \n \n \n \n These functions can be accessed via the {{ urls }} object in Datasette templates, for example: \n <a href=\"{{ urls.instance() }}\">Homepage</a>\n<a href=\"{{ urls.database(\"fixtures\") }}\">Fixtures database</a>\n<a href=\"{{ urls.table(\"fixtures\", \"facetable\") }}\">facetable table</a>\n<a href=\"{{ urls.query(\"fixtures\", \"pragma_cache_size\") }}\">pragma_cache_size query</a> \n Use the format=\"json\" (or \"csv\" or other formats supported by plugins) arguments to get back URLs to the JSON representation. This is the path with .json added on the end. \n These methods each return a datasette.utils.PrefixedUrlString object, which is a subclass of the Python str type.
This allows the logic that considers the base_url setting to detect if that prefix has already been applied to the path.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:internals-internal", "page": "internals", "ref": "internals-internal", "title": "Datasette's internal database", "content": "Datasette maintains an \"internal\" SQLite database used for configuration, caching, and storage. Plugins can store configuration, settings, and other data inside this database. By default, Datasette will use a temporary in-memory SQLite database as the internal database, which is created at startup and destroyed at shutdown. Users of Datasette can optionally pass in a --internal flag to specify the path to a SQLite database to use as the internal database, which will persist internal data across Datasette instances. \n Datasette maintains tables called catalog_databases , catalog_tables , catalog_columns , catalog_indexes , catalog_foreign_keys with details of the attached databases and their schemas. These tables should not be considered a stable API - they may change between Datasette releases. \n The internal database is not exposed in the Datasette application by default, which means private data can safely be stored without worry of accidentally leaking information through the default Datasette interface and API. However, other plugins do have full read and write access to the internal database. \n Plugins can access this database by calling internal_db = datasette.get_internal_database() and then executing queries using the Database API . \n Plugin authors are asked to practice good etiquette when using the internal database, as all plugins use the same database to store data. For example: \n \n \n Use a unique prefix when creating tables, indices, and triggers in the internal database. If your plugin is called datasette-xyz , then prefix names with datasette_xyz_* . \n \n \n Avoid long-running write statements that may stall or block other plugins that are trying to write at the same time. \n \n \n Use temporary tables or shared in-memory attached databases when possible. \n \n \n Avoid implementing features that could expose private data stored in the internal database by other plugins.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-multiparams", "page": "internals", "ref": "internals-multiparams", "title": "The MultiParams class", "content": "request.args is a MultiParams object - a dictionary-like object which provides access to query string parameters that may have multiple values. \n Consider the query string ?foo=1&foo=2&bar=3 - with two values for foo and one value for bar . \n \n \n request.args[key] - string \n \n Returns the first value for that key, or raises a KeyError if the key is missing. For the above example request.args[\"foo\"] would return \"1\" . \n \n \n \n request.args.get(key) - string or None \n \n Returns the first value for that key, or None if the key is missing. Pass a second argument to specify a different default, e.g. q = request.args.get(\"q\", \"\") . \n \n \n \n request.args.getlist(key) - list of strings \n \n Returns the list of strings for that key. request.args.getlist(\"foo\") would return [\"1\", \"2\"] in the above example. request.args.getlist(\"bar\") would return [\"3\"] . If the key is missing an empty list will be returned. 
\n \n \n \n request.args.keys() - list of strings \n \n Returns the list of available keys - for the example this would be [\"foo\", \"bar\"] . \n \n \n \n key in request.args - True or False \n \n You can use if key in request.args to check if a key is present. \n \n \n \n for key in request.args - iterator \n \n This lets you loop through every available key. \n \n \n \n len(request.args) - integer \n \n Returns the number of keys.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-response", "page": "internals", "ref": "internals-response", "title": "Response class", "content": "The Response class can be returned from view functions that have been registered using the register_routes(datasette) hook. \n The Response() constructor takes the following arguments: \n \n \n body - string \n \n The body of the response. \n \n \n \n status - integer (optional) \n \n The HTTP status - defaults to 200. \n \n \n \n headers - dictionary (optional) \n \n A dictionary of extra HTTP headers, e.g. {\"x-hello\": \"world\"} . \n \n \n \n content_type - string (optional) \n \n The content-type for the response. Defaults to text/plain . \n \n \n \n For example: \n from datasette.utils.asgi import Response\n\nresponse = Response(\n \"<xml>This is XML</xml>\",\n content_type=\"application/xml; charset=utf-8\",\n) \n The quickest way to create responses is using the Response.text(...) , Response.html(...) , Response.json(...) or Response.redirect(...) helper methods: \n from datasette.utils.asgi import Response\n\nhtml_response = Response.html(\"This is HTML\")\njson_response = Response.json({\"this_is\": \"json\"})\ntext_response = Response.text(\n \"This will become utf-8 encoded text\"\n)\n# Redirects are served as 302, unless you pass status=301:\nredirect_response = Response.redirect(\n \"https://latest.datasette.io/\"\n) \n Each of these responses will use the correct corresponding content-type - text/html; charset=utf-8 , application/json; charset=utf-8 or text/plain; charset=utf-8 respectively. \n Each of the helper methods takes optional status= and headers= arguments, documented above.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-response-asgi-send", "page": "internals", "ref": "internals-response-asgi-send", "title": "Returning a response with .asgi_send(send)", "content": "In most cases you will return Response objects from your own view functions. You can also use a Response instance to respond at a lower level via ASGI, for example if you are writing code that uses the asgi_wrapper(datasette) hook. \n Create a Response object and then use await response.asgi_send(send) , passing the ASGI send function. For example: \n async def require_authorization(scope, receive, send):\n response = Response.text(\n \"401 Authorization Required\",\n headers={\n \"www-authenticate\": 'Basic realm=\"Datasette\", charset=\"UTF-8\"'\n },\n status=401,\n )\n await response.asgi_send(send)", "breadcrumbs": "[\"Internals for plugins\", \"Response class\"]", "references": "[]"} {"id": "internals:internals-response-set-cookie", "page": "internals", "ref": "internals-response-set-cookie", "title": "Setting cookies with response.set_cookie()", "content": "To set cookies on the response, use the response.set_cookie(...) method. The method signature looks like this: \n def set_cookie(\n self,\n key,\n value=\"\",\n max_age=None,\n expires=None,\n path=\"/\",\n domain=None,\n secure=False,\n httponly=False,\n samesite=\"lax\",\n): ...
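\n For example, a minimal sketch that sets an ordinary unsigned cookie on a response - the cookie name and value here are illustrative: \n response = Response.text(\"Saved\")\nresponse.set_cookie(\"preferred-format\", \"json\", max_age=3600)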
\n You can use this with datasette.sign() to set signed cookies. Here's how you would set the ds_actor cookie for use with Datasette authentication : \n response = Response.redirect(\"/\")\nresponse.set_cookie(\n \"ds_actor\",\n datasette.sign({\"a\": {\"id\": \"cleopaws\"}}, \"actor\"),\n)\nreturn response", "breadcrumbs": "[\"Internals for plugins\", \"Response class\"]", "references": "[]"} {"id": "internals:internals-shortcuts", "page": "internals", "ref": "internals-shortcuts", "title": "Import shortcuts", "content": "The following commonly used symbols can be imported directly from the datasette module: \n from datasette import Response\nfrom datasette import Forbidden\nfrom datasette import NotFound\nfrom datasette import hookimpl\nfrom datasette import actor_matches_allow", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"} {"id": "internals:internals-utils-derive-named-parameters", "page": "internals", "ref": "internals-utils-derive-named-parameters", "title": "derive_named_parameters(db, sql)", "content": "Derive the list of named parameters referenced in a SQL query, using an explain query executed against the provided database. \n \n \n async datasette.utils.derive_named_parameters(db: Database, sql: str) -> List[str] \n \n Given a SQL statement, return a list of named parameters that are used in the statement \n e.g. for select * from foo where id=:id this would return [\"id\"]", "breadcrumbs": "[\"Internals for plugins\", \"The datasette.utils module\"]", "references": "[]"} {"id": "internals:internals-utils-parse-metadata", "page": "internals", "ref": "internals-utils-parse-metadata", "title": "parse_metadata(content)", "content": "This function accepts a string containing either JSON or YAML, expected to be of the format described in Metadata . It returns a nested Python dictionary representing the parsed data from that string. \n If the metadata cannot be parsed as either JSON or YAML the function will raise a utils.BadMetadataError exception. \n \n \n datasette.utils.parse_metadata(content: str) -> dict \n \n Detects if content is JSON or YAML and parses it appropriately.", "breadcrumbs": "[\"Internals for plugins\", \"The datasette.utils module\"]", "references": "[]"} {"id": "introspection:id1", "page": "introspection", "ref": "id1", "title": "Introspection", "content": "Datasette includes some pages and JSON API endpoints for introspecting the current instance. These can be used to understand some of the internals of Datasette and to see how a particular instance has been configured. \n Each of these pages can be viewed in your browser. Add .json to the URL to get back the contents as JSON.", "breadcrumbs": "[]", "references": "[]"} {"id": "introspection:jsondataview-actor", "page": "introspection", "ref": "jsondataview-actor", "title": "/-/actor", "content": "Shows the currently authenticated actor. Useful for debugging Datasette authentication plugins. \n {\n \"actor\": {\n \"id\": 1,\n \"username\": \"some-user\"\n }\n}", "breadcrumbs": "[\"Introspection\"]", "references": "[]"} {"id": "introspection:messagesdebugview", "page": "introspection", "ref": "messagesdebugview", "title": "/-/messages", "content": "The debug tool at /-/messages can be used to set flash messages to try out that feature.
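\n As a sketch, a plugin view could add a message and then redirect - the message text here is illustrative: \n datasette.add_message(request, \"Settings saved\", datasette.INFO)\nreturn Response.redirect(\"/\") \n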
See .add_message(request, message, type=datasette.INFO) for details of this feature.", "breadcrumbs": "[\"Introspection\"]", "references": "[]"} {"id": "javascript_plugins:id1", "page": "javascript_plugins", "ref": "id1", "title": "JavaScript plugins", "content": "Datasette can run custom JavaScript in several different ways: \n \n \n Datasette plugins written in Python can use the extra_js_urls() or extra_body_script() plugin hooks to inject JavaScript into a page \n \n \n Datasette instances with custom templates can include additional JavaScript in those templates \n \n \n The extra_js_urls key in datasette.yaml can be used to include extra JavaScript \n \n \n There are no limitations on what this JavaScript can do. It is executed directly by the browser, so it can manipulate the DOM, fetch additional data and do anything else that JavaScript is capable of. \n \n Custom JavaScript has security implications, especially for authenticated Datasette instances where the JavaScript might run in the context of the authenticated user. It's important to carefully review any JavaScript you run in your Datasette instance.", "breadcrumbs": "[]", "references": "[]"} {"id": "javascript_plugins:id2", "page": "javascript_plugins", "ref": "id2", "title": "JavaScript plugin objects", "content": "JavaScript plugins are blocks of code that can be registered with Datasette using the registerPlugin() method on the datasetteManager object. \n The implementation object passed to this method should include a version key defining the plugin version, and one or more of the following named functions providing the implementation of the plugin:", "breadcrumbs": "[\"JavaScript plugins\"]", "references": "[]"} {"id": "javascript_plugins:javascript-datasette-init", "page": "javascript_plugins", "ref": "javascript-datasette-init", "title": "The datasette_init event", "content": "Datasette emits a custom event called datasette_init when the page is loaded. This event is dispatched on the document object, and includes a detail object with a reference to the datasetteManager object. \n Your JavaScript code can listen out for this event using document.addEventListener() like this: \n document.addEventListener(\"datasette_init\", function (evt) {\n const manager = evt.detail;\n console.log(\"Datasette version:\", manager.VERSION);\n});", "breadcrumbs": "[\"JavaScript plugins\"]", "references": "[]"} {"id": "javascript_plugins:javascript-datasette-manager", "page": "javascript_plugins", "ref": "javascript-datasette-manager", "title": "datasetteManager", "content": "The datasetteManager object \n \n \n VERSION - string \n \n The version of Datasette \n \n \n \n plugins - Map() \n \n A Map of currently loaded plugin names to plugin implementations \n \n \n \n registerPlugin(name, implementation) \n \n Call this to register a plugin, passing its name and implementation \n \n \n \n selectors - object \n \n An object providing named aliases to useful CSS selectors, listed below", "breadcrumbs": "[\"JavaScript plugins\"]", "references": "[]"} {"id": "javascript_plugins:javascript-datasette-manager-selectors", "page": "javascript_plugins", "ref": "javascript-datasette-manager-selectors", "title": "Selectors", "content": "These are available on the selectors property of the datasetteManager object. \n const DOM_SELECTORS = {\n /** Should have one match */\n jsonExportLink: \".export-links a[href*=json]\",\n\n /** Event listeners that go outside of the main table, e.g. 
existing scroll listener */\n tableWrapper: \".table-wrapper\",\n table: \"table.rows-and-columns\",\n aboveTablePanel: \".above-table-panel\",\n\n // These could have multiple matches\n /** Used for selecting table headers. Use makeColumnActions if you want to add menu items. */\n tableHeaders: `table.rows-and-columns th`,\n\n /** Used to add \"where\" clauses to query using direct manipulation */\n filterRows: \".filter-row\",\n /** Used to show top available enum values for a column (\"facets\") */\n facetResults: \".facet-results [data-column]\",\n};", "breadcrumbs": "[\"JavaScript plugins\"]", "references": "[]"} {"id": "javascript_plugins:javascript-plugins-makeabovetablepanelconfigs", "page": "javascript_plugins", "ref": "javascript-plugins-makeabovetablepanelconfigs", "title": "makeAboveTablePanelConfigs()", "content": "This method should return a JavaScript array of objects defining additional panels to be added to the top of the table page. Each object should have the following: \n \n \n id - string \n \n A unique string ID for the panel, for example map-panel \n \n \n \n label - string \n \n A human-readable label for the panel \n \n \n \n render(node) - function \n \n A function that will be called with a DOM node to render the panel into \n \n \n \n This example shows how a plugin might define a single panel: \n document.addEventListener('datasette_init', function(ev) {\n ev.detail.registerPlugin('panel-plugin', {\n version: 0.1,\n makeAboveTablePanelConfigs: () => {\n return [\n {\n id: 'first-panel',\n label: 'First panel',\n render: node => {\n node.innerHTML = '

<div><h2>My custom panel</h2><p>This is a custom panel that I added using a JavaScript plugin</p></div>

';\n }\n }\n ]\n }\n });\n}); \n When a page with a table loads, all registered plugins that implement makeAboveTablePanelConfigs() will be called and panels they return will be added to the top of the table page.", "breadcrumbs": "[\"JavaScript plugins\", \"JavaScript plugin objects\"]", "references": "[]"} {"id": "javascript_plugins:javascript-plugins-makecolumnactions", "page": "javascript_plugins", "ref": "javascript-plugins-makecolumnactions", "title": "makeColumnActions(columnDetails)", "content": "This method, if present, will be called when Datasette is rendering the cog action menu icons that appear at the top of the table view. By default these include options like \"Sort ascending/descending\" and \"Facet by this\", but plugins can return additional actions to be included in this menu. \n The method will be called with a columnDetails object with the following keys: \n \n \n columnName - string \n \n The name of the column \n \n \n \n columnNotNull - boolean \n \n True if the column is defined as NOT NULL \n \n \n \n columnType - string \n \n The SQLite data type of the column \n \n \n \n isPk - boolean \n \n True if the column is part of the primary key \n \n \n \n It should return a JavaScript array of objects each with a label and onClick property: \n \n \n label - string \n \n The human-readable label for the action \n \n \n \n onClick(evt) - function \n \n A function that will be called when the action is clicked \n \n \n \n The evt object passed to the onClick is the standard browser event object that triggered the click. \n This example plugin adds two menu items - one to copy the column name to the clipboard and another that displays the column metadata in an alert() window: \n document.addEventListener('datasette_init', function(ev) {\n ev.detail.registerPlugin('column-name-plugin', {\n version: 0.1,\n makeColumnActions: (columnDetails) => {\n return [\n {\n label: 'Copy column to clipboard',\n onClick: async (evt) => {\n await navigator.clipboard.writeText(columnDetails.columnName)\n }\n },\n {\n label: 'Alert column metadata',\n onClick: () => alert(JSON.stringify(columnDetails, null, 2))\n }\n ];\n }\n });\n});", "breadcrumbs": "[\"JavaScript plugins\", \"JavaScript plugin objects\"]", "references": "[]"} {"id": "json_api:column-filter-arguments", "page": "json_api", "ref": "column-filter-arguments", "title": "Column filter arguments", "content": "You can filter the data returned by the table based on column values using a query string argument. \n \n \n ?column__exact=value or ?_column=value \n \n Returns rows where the specified column exactly matches the value. \n \n \n \n ?column__not=value \n \n Returns rows where the column does not match the value. \n \n \n \n ?column__contains=value \n \n Rows where the string column contains the specified value ( column like \"%value%\" in SQL). \n \n \n \n ?column__notcontains=value \n \n Rows where the string column does not contain the specified value ( column not like \"%value%\" in SQL). \n \n \n \n ?column__endswith=value \n \n Rows where the string column ends with the specified value ( column like \"%value\" in SQL). \n \n \n \n ?column__startswith=value \n \n Rows where the string column starts with the specified value ( column like \"value%\" in SQL). \n \n \n \n ?column__gt=value \n \n Rows which are greater than the specified value. \n \n \n \n ?column__gte=value \n \n Rows which are greater than or equal to the specified value. \n \n \n \n ?column__lt=value \n \n Rows which are less than the specified value. 
\n \n \n \n ?column__lte=value \n \n Rows which are less than or equal to the specified value. \n \n \n \n ?column__like=value \n \n Match rows with a LIKE clause, case insensitive and with % as the wildcard character. \n \n \n \n ?column__notlike=value \n \n Match rows that do not match the provided LIKE clause. \n \n \n \n ?column__glob=value \n \n Similar to LIKE but uses Unix wildcard syntax and is case sensitive. \n \n \n \n ?column__in=value1,value2,value3 \n \n Rows where column matches any of the provided values. \n You can use a comma separated string, or you can use a JSON array. \n The JSON array option is useful if one of your matching values itself contains a comma: \n ?column__in=[\"value\",\"value,with,commas\"] \n \n \n \n ?column__notin=value1,value2,value3 \n \n Rows where column does not match any of the provided values. The inverse of __in= . Also supports JSON arrays. \n \n \n \n ?column__arraycontains=value \n \n Works against columns that contain JSON arrays - matches if any of the values in that array match the provided value. \n This is only available if the json1 SQLite extension is enabled. \n \n \n \n ?column__arraynotcontains=value \n \n Works against columns that contain JSON arrays - matches if none of the values in that array match the provided value. \n This is only available if the json1 SQLite extension is enabled. \n \n \n \n ?column__date=value \n \n Column is a datestamp occurring on the specified YYYY-MM-DD date, e.g. 2018-01-02 . \n \n \n \n ?column__isnull=1 \n \n Matches rows where the column is null. \n \n \n \n ?column__notnull=1 \n \n Matches rows where the column is not null. \n \n \n \n ?column__isblank=1 \n \n Matches rows where the column is blank, meaning null or the empty string. \n \n \n \n ?column__notblank=1 \n \n Matches rows where the column is not blank.", "breadcrumbs": "[\"JSON API\", \"Table arguments\"]", "references": "[]"} {"id": "json_api:expand-foreign-keys", "page": "json_api", "ref": "expand-foreign-keys", "title": "Expanding foreign key references", "content": "Datasette can detect foreign key relationships and resolve those references into\n labels. The HTML interface does this by default for every detected foreign key\n column - you can turn that off using ?_labels=off . \n You can request foreign keys be expanded in JSON using the _labels=on or\n _label=COLUMN special query string parameters. Here's what an expanded row\n looks like: \n [\n {\n \"rowid\": 1,\n \"TreeID\": 141565,\n \"qLegalStatus\": {\n \"value\": 1,\n \"label\": \"Permitted Site\"\n },\n \"qSpecies\": {\n \"value\": 1,\n \"label\": \"Myoporum laetum :: Myoporum\"\n },\n \"qAddress\": \"501X Baker St\",\n \"SiteOrder\": 1\n }\n] \n The column in the foreign key table that is used for the label can be specified\n in metadata.json - see Specifying the label column for a table .", "breadcrumbs": "[\"JSON API\"]", "references": "[]"} {"id": "json_api:id1", "page": "json_api", "ref": "id1", "title": "JSON API", "content": "Datasette provides a JSON API for your SQLite databases. Anything you can do\n through the Datasette user interface can also be accessed as JSON via the API. 
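\n For example, a minimal Python sketch that fetches rows from a hypothetical local instance - the URL, database and table names are illustrative: \n import json\nimport urllib.request\n\n# Fetch the JSON representation of a table page\nwith urllib.request.urlopen(\n \"http://127.0.0.1:8001/fixtures/facetable.json\"\n) as response:\n data = json.load(response)\nprint(data[\"rows\"][:3])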
\n To access the API for a page, either click on the .json link on that page or\n edit the URL and add a .json extension to it.", "breadcrumbs": "[]", "references": "[]"} {"id": "json_api:id2", "page": "json_api", "ref": "id2", "title": "Table arguments", "content": "The Datasette table view takes a number of special query string arguments.", "breadcrumbs": "[\"JSON API\"]", "references": "[]"} {"id": "json_api:json-api-cors", "page": "json_api", "ref": "json-api-cors", "title": "Enabling CORS", "content": "If you start Datasette with the --cors option, each JSON endpoint will be\n served with the following additional HTTP headers: \n [[[cog\nfrom datasette.utils import add_cors_headers\nimport textwrap\nheaders = {}\nadd_cors_headers(headers)\noutput = \"\\n\".join(\"{}: {}\".format(k, v) for k, v in headers.items())\ncog.out(\"\\n::\\n\\n\")\ncog.out(textwrap.indent(output, ' '))\ncog.out(\"\\n\\n\") \n ]]] \n Access-Control-Allow-Origin: *\nAccess-Control-Allow-Headers: Authorization, Content-Type\nAccess-Control-Expose-Headers: Link\nAccess-Control-Allow-Methods: GET, POST, HEAD, OPTIONS\nAccess-Control-Max-Age: 3600 \n [[[end]]] \n This allows JavaScript running on any domain to make cross-origin\n requests to interact with the Datasette API. \n If you start Datasette without the --cors option only JavaScript running on\n the same domain as Datasette will be able to access the API. \n Here's how to serve data.db with CORS enabled: \n datasette data.db --cors", "breadcrumbs": "[\"JSON API\"]", "references": "[]"} {"id": "json_api:json-api-default", "page": "json_api", "ref": "json-api-default", "title": "Default representation", "content": "The default JSON representation of data from a SQLite table or custom query\n looks like this: \n {\n \"ok\": true,\n \"rows\": [\n {\n \"id\": 3,\n \"name\": \"Detroit\"\n },\n {\n \"id\": 2,\n \"name\": \"Los Angeles\"\n },\n {\n \"id\": 4,\n \"name\": \"Memnonia\"\n },\n {\n \"id\": 1,\n \"name\": \"San Francisco\"\n }\n ],\n \"truncated\": false\n} \n \"ok\" is always true if an error did not occur. \n The \"rows\" key is a list of objects, each one representing a row. \n The \"truncated\" key lets you know if the query was truncated. This can happen if a SQL query returns more rows than the max_returned_rows setting allows (1,000 by default). \n For table pages, an additional key \"next\" may be present. This indicates that the next page in the pagination set can be retrieved using ?_next=VALUE .", "breadcrumbs": "[\"JSON API\"]", "references": "[]"} {"id": "json_api:json-api-discover-alternate", "page": "json_api", "ref": "json-api-discover-alternate", "title": "Discovering the JSON for a page", "content": "Most of the HTML pages served by Datasette provide a mechanism for discovering their JSON equivalents using the HTML link mechanism. \n You can find this near the top of the source code of those pages, looking like this: \n <link rel=\"alternate\" type=\"application/json+datasette\" href=\"https://latest.datasette.io/fixtures/sortable.json\"> \n The JSON URL is also made available in a Link HTTP header for the page: \n Link: <https://latest.datasette.io/fixtures/sortable.json>; rel=\"alternate\"; type=\"application/json+datasette\"", "breadcrumbs": "[\"JSON API\"]", "references": "[]"} {"id": "json_api:json-api-shapes", "page": "json_api", "ref": "json-api-shapes", "title": "Different shapes", "content": "The _shape parameter can be used to access alternative formats for the\n rows key which may be more convenient for your application.
There are six\n options: \n \n \n ?_shape=objects - \"rows\" is a list of JSON key/value objects - the default \n \n \n ?_shape=arrays - \"rows\" is a list of lists, where the order of values in each list matches the order of the columns \n \n \n ?_shape=array - a JSON array of objects - effectively just the \"rows\" key from the default representation \n \n \n ?_shape=array&_nl=on - a newline-separated list of JSON objects \n \n \n ?_shape=arrayfirst - a flat JSON array containing just the first value from each row \n \n \n ?_shape=object - a JSON object keyed using the primary keys of the rows \n \n \n _shape=arrays looks like this: \n {\n \"ok\": true,\n \"next\": null,\n \"rows\": [\n [3, \"Detroit\"],\n [2, \"Los Angeles\"],\n [4, \"Memnonia\"],\n [1, \"San Francisco\"]\n ]\n} \n _shape=array looks like this: \n [\n {\n \"id\": 3,\n \"name\": \"Detroit\"\n },\n {\n \"id\": 2,\n \"name\": \"Los Angeles\"\n },\n {\n \"id\": 4,\n \"name\": \"Memnonia\"\n },\n {\n \"id\": 1,\n \"name\": \"San Francisco\"\n }\n] \n _shape=array&_nl=on looks like this: \n {\"id\": 1, \"value\": \"Myoporum laetum :: Myoporum\"}\n{\"id\": 2, \"value\": \"Metrosideros excelsa :: New Zealand Xmas Tree\"}\n{\"id\": 3, \"value\": \"Pinus radiata :: Monterey Pine\"} \n _shape=arrayfirst looks like this: \n [1, 2, 3] \n _shape=object looks like this: \n {\n \"1\": {\n \"id\": 1,\n \"value\": \"Myoporum laetum :: Myoporum\"\n },\n \"2\": {\n \"id\": 2,\n \"value\": \"Metrosideros excelsa :: New Zealand Xmas Tree\"\n },\n \"3\": {\n \"id\": 3,\n \"value\": \"Pinus radiata :: Monterey Pine\"\n }\n} \n The object shape is only available for queries against tables - custom SQL\n queries and views do not have an obvious primary key so cannot be returned using\n this format. \n The object keys are always strings. If your table has a compound primary\n key, the object keys will be a comma-separated string.", "breadcrumbs": "[\"JSON API\"]", "references": "[]"} {"id": "json_api:json-api-write", "page": "json_api", "ref": "json-api-write", "title": "The JSON write API", "content": "Datasette provides a write API for JSON data. This is a POST-only API that requires an authenticated API token, see API Tokens . The token will need to have the specified Permissions .", "breadcrumbs": "[\"JSON API\"]", "references": "[]"} {"id": "json_api:rowdeleteview", "page": "json_api", "ref": "rowdeleteview", "title": "Deleting a row", "content": "To delete a row, make a POST to /<database>/<table>/<row-pks>/-/delete . This requires the delete-row permission. \n POST /<database>/<table>/<row-pks>
/-/delete\nContent-Type: application/json\nAuthorization: Bearer dstok_<rest-of-token> \n <row-pks> here is the tilde-encoded primary key value of the row to delete - or a comma-separated list of primary key values if the table has a composite primary key. \n If successful, this will return a 200 status code and a {\"ok\": true} response body. \n Any errors will return {\"errors\": [\"... descriptive message ...\"], \"ok\": false} , and a 400 status code for a bad input or a 403 status code for an authentication or permission error.", "breadcrumbs": "[\"JSON API\", \"The JSON write API\"]", "references": "[]"} {"id": "json_api:rowupdateview", "page": "json_api", "ref": "rowupdateview", "title": "Updating a row", "content": "To update a row, make a POST to /<database>/<table>/<row-pks>
/-/update . This requires the update-row permission. \n POST /<database>/<table>/<row-pks>
/-/update\nContent-Type: application/json\nAuthorization: Bearer dstok_<rest-of-token> \n {\n \"update\": {\n \"text_column\": \"New text string\",\n \"integer_column\": 3,\n \"float_column\": 3.14\n }\n} \n <row-pks> here is the tilde-encoded primary key value of the row to update - or a comma-separated list of primary key values if the table has a composite primary key. \n You only need to pass the columns you want to update. Any other columns will be left unchanged. \n If successful, this will return a 200 status code and a {\"ok\": true} response body. \n Add \"return\": true to the request body to return the updated row: \n {\n \"update\": {\n \"title\": \"New title\"\n },\n \"return\": true\n} \n The returned JSON will look like this: \n {\n \"ok\": true,\n \"row\": {\n \"id\": 1,\n \"title\": \"New title\",\n \"other_column\": \"Will be present here too\"\n }\n} \n Any errors will return {\"errors\": [\"... descriptive message ...\"], \"ok\": false} , and a 400 status code for a bad input or a 403 status code for an authentication or permission error. \n Pass \"alter\": true to automatically add any missing columns to the table. This requires the alter-table permission.", "breadcrumbs": "[\"JSON API\", \"The JSON write API\"]", "references": "[]"} {"id": "json_api:tablecreateview", "page": "json_api", "ref": "tablecreateview", "title": "Creating a table", "content": "To create a table, make a POST to /<database>/-/create . This requires the create-table permission. \n POST /<database>/-/create\nContent-Type: application/json\nAuthorization: Bearer dstok_<rest-of-token> \n {\n \"table\": \"name_of_new_table\",\n \"columns\": [\n {\n \"name\": \"id\",\n \"type\": \"integer\"\n },\n {\n \"name\": \"title\",\n \"type\": \"text\"\n }\n ],\n \"pk\": \"id\"\n} \n The JSON here describes the table that will be created: \n \n \n table is the name of the table to create. This field is required. \n \n \n columns is a list of columns to create. Each column is a dictionary with name and type keys. \n \n \n name is the name of the column. This is required. \n \n \n type is the type of the column. This is optional - if not provided, text will be assumed. The valid types are text , integer , float and blob . \n \n \n \n \n pk is the primary key for the table. This is optional - if not provided, Datasette will create a SQLite table with a hidden rowid column. \n If the primary key is an integer column, it will be configured to automatically increment for each new record. \n If you set this to id without including an id column in the list of columns , Datasette will create an auto-incrementing integer ID column for you. \n \n \n pks can be used instead of pk to create a compound primary key. It should be a JSON list of column names to use in that primary key. \n \n \n ignore can be set to true to ignore existing rows by primary key if the table already exists. \n \n \n replace can be set to true to replace existing rows by primary key if the table already exists. This requires the update-row permission. \n \n \n alter can be set to true if you want to automatically add any missing columns to the table. This requires the alter-table permission.
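\n As a sketch, this create call could be made from Python like this - the URL, token and table details are placeholders: \n import json\nimport urllib.request\n\n# Build a POST request describing the new table\nreq = urllib.request.Request(\n \"http://127.0.0.1:8001/data/-/create\",\n data=json.dumps(\n {\n \"table\": \"my_table\",\n \"columns\": [\n {\"name\": \"id\", \"type\": \"integer\"},\n {\"name\": \"title\", \"type\": \"text\"},\n ],\n \"pk\": \"id\",\n }\n ).encode(\"utf-8\"),\n headers={\n \"Content-Type\": \"application/json\",\n \"Authorization\": \"Bearer dstok_...\",\n },\n)\nwith urllib.request.urlopen(req) as response:\n print(json.load(response))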
\n \n \n If the table is successfully created this will return a 201 status code and the following response: \n {\n \"ok\": true,\n \"database\": \"data\",\n \"table\": \"name_of_new_table\",\n \"table_url\": \"http://127.0.0.1:8001/data/name_of_new_table\",\n \"table_api_url\": \"http://127.0.0.1:8001/data/name_of_new_table.json\",\n \"schema\": \"CREATE TABLE [name_of_new_table] (\\n [id] INTEGER PRIMARY KEY,\\n [title] TEXT\\n)\"\n}", "breadcrumbs": "[\"JSON API\", \"The JSON write API\"]", "references": "[]"} {"id": "json_api:tablecreateview-example", "page": "json_api", "ref": "tablecreateview-example", "title": "Creating a table from example data", "content": "Instead of specifying columns directly you can instead pass a single example row or a list of rows .\n Datasette will create a table with a schema that matches those rows and insert them for you: \n POST /<database>/-/create\nContent-Type: application/json\nAuthorization: Bearer dstok_<rest-of-token> \n {\n \"table\": \"creatures\",\n \"rows\": [\n {\n \"id\": 1,\n \"name\": \"Tarantula\"\n },\n {\n \"id\": 2,\n \"name\": \"K\u0101k\u0101p\u014d\"\n }\n ],\n \"pk\": \"id\"\n} \n Doing this requires both the create-table and insert-row permissions. \n The 201 response here will be similar to the columns form, but will also include the number of rows that were inserted as row_count : \n {\n \"ok\": true,\n \"database\": \"data\",\n \"table\": \"creatures\",\n \"table_url\": \"http://127.0.0.1:8001/data/creatures\",\n \"table_api_url\": \"http://127.0.0.1:8001/data/creatures.json\",\n \"schema\": \"CREATE TABLE [creatures] (\\n [id] INTEGER PRIMARY KEY,\\n [name] TEXT\\n)\",\n \"row_count\": 2\n} \n You can call the create endpoint multiple times for the same table provided you are specifying the table using the rows or row option. New rows will be inserted into the table each time. This means you can use this API if you are unsure if the relevant table has been created yet. \n If you pass a row to the create endpoint with a primary key that already exists you will get an error that looks like this: \n {\n \"ok\": false,\n \"errors\": [\n \"UNIQUE constraint failed: creatures.id\"\n ]\n} \n You can avoid this error by passing the same \"ignore\": true or \"replace\": true options to the create endpoint as you can to the insert endpoint . \n To use the \"replace\": true option you will also need the update-row permission. \n Pass \"alter\": true to automatically add any missing columns to the existing table that are present in the rows you are submitting. This requires the alter-table permission.", "breadcrumbs": "[\"JSON API\", \"The JSON write API\"]", "references": "[]"} {"id": "json_api:tabledropview", "page": "json_api", "ref": "tabledropview", "title": "Dropping tables", "content": "To drop a table, make a POST to /<database>/<table>
/-/drop . This requires the drop-table permission. \n POST /<database>/<table>
/-/drop\nContent-Type: application/json\nAuthorization: Bearer dstok_<rest-of-token> \n Without a POST body this will return a status 200 with a note about how many rows will be deleted: \n {\n \"ok\": true,\n \"database\": \"<database>\",\n \"table\": \"<table>
\",\n \"row_count\": 5,\n \"message\": \"Pass \\\"confirm\\\": true to confirm\"\n} \n If you pass the following POST body: \n {\n \"confirm\": true\n} \n Then the table will be dropped and a status 200 response of {\"ok\": true} will be returned. \n Any errors will return {\"errors\": [\"... descriptive message ...\"], \"ok\": false} , and a 400 status code for a bad input or a 403 status code for an authentication or permission error.", "breadcrumbs": "[\"JSON API\", \"The JSON write API\"]", "references": "[]"} {"id": "json_api:tableinsertview", "page": "json_api", "ref": "tableinsertview", "title": "Inserting rows", "content": "This requires the insert-row permission. \n A single row can be inserted using the \"row\" key: \n POST //
/-/insert\nContent-Type: application/json\nAuthorization: Bearer dstok_<rest-of-token> \n {\n \"row\": {\n \"column1\": \"value1\",\n \"column2\": \"value2\"\n }\n} \n If successful, this will return a 201 status code and the newly inserted row, for example: \n {\n \"rows\": [\n {\n \"id\": 1,\n \"column1\": \"value1\",\n \"column2\": \"value2\"\n }\n ]\n} \n To insert multiple rows at a time, use the same API method but send a list of dictionaries as the \"rows\" key: \n POST /<database>/<table>
/-/insert\nContent-Type: application/json\nAuthorization: Bearer dstok_<rest-of-token> \n {\n \"rows\": [\n {\n \"column1\": \"value1\",\n \"column2\": \"value2\"\n },\n {\n \"column1\": \"value3\",\n \"column2\": \"value4\"\n }\n ]\n} \n If successful, this will return a 201 status code and a {\"ok\": true} response body. \n The maximum number of rows that can be submitted at once defaults to 100, but this can be changed using the max_insert_rows setting. \n To return the newly inserted rows, add the \"return\": true key to the request body: \n {\n \"rows\": [\n {\n \"column1\": \"value1\",\n \"column2\": \"value2\"\n },\n {\n \"column1\": \"value3\",\n \"column2\": \"value4\"\n }\n ],\n \"return\": true\n} \n This will return the same \"rows\" key as the single row example above. There is a small performance penalty for using this option. \n If any of your rows have a primary key that is already in use, you will get an error and none of the rows will be inserted: \n {\n \"ok\": false,\n \"errors\": [\n \"UNIQUE constraint failed: new_table.id\"\n ]\n} \n Pass \"ignore\": true to ignore these errors and insert the other rows: \n {\n \"rows\": [\n {\n \"id\": 1,\n \"column1\": \"value1\",\n \"column2\": \"value2\"\n },\n {\n \"id\": 2,\n \"column1\": \"value3\",\n \"column2\": \"value4\"\n }\n ],\n \"ignore\": true\n} \n Or you can pass \"replace\": true to replace any rows with conflicting primary keys with the new values. This requires the update-row permission. \n Pass \"alter\": true to automatically add any missing columns to the table. This requires the alter-table permission.", "breadcrumbs": "[\"JSON API\", \"The JSON write API\"]", "references": "[]"} {"id": "json_api:tableupsertview", "page": "json_api", "ref": "tableupsertview", "title": "Upserting rows", "content": "An upsert is an insert or update operation. If a row with a matching primary key already exists it will be updated - otherwise a new row will be inserted. \n The upsert API is mostly the same shape as the insert API . It requires both the insert-row and update-row permissions. \n POST /<database>/<table>
/-/upsert\nContent-Type: application/json\nAuthorization: Bearer dstok_<rest-of-token> \n {\n \"rows\": [\n {\n \"id\": 1,\n \"title\": \"Updated title for 1\",\n \"description\": \"Updated description for 1\"\n },\n {\n \"id\": 2,\n \"description\": \"Updated description for 2\"\n },\n {\n \"id\": 3,\n \"title\": \"Item 3\",\n \"description\": \"Description for 3\"\n }\n ]\n} \n Imagine a table with a primary key of id and which already has rows with id values of 1 and 2 . \n The above example will: \n \n \n Update the row with id of 1 to set both title and description to the new values \n \n \n Update the row with id of 2 to set description to the new value - title will be left unchanged \n \n \n Insert a new row with id of 3 and both title and description set to the new values \n \n \n Similar to /-/insert , a row key with an object can be used instead of a rows array to upsert a single row. \n If successful, this will return a 200 status code and a {\"ok\": true} response body. \n Add \"return\": true to the request body to return full copies of the affected rows after they have been inserted or updated: \n {\n \"rows\": [\n {\n \"id\": 1,\n \"title\": \"Updated title for 1\",\n \"description\": \"Updated description for 1\"\n },\n {\n \"id\": 2,\n \"description\": \"Updated description for 2\"\n },\n {\n \"id\": 3,\n \"title\": \"Item 3\",\n \"description\": \"Description for 3\"\n }\n ],\n \"return\": true\n} \n This will return the following: \n {\n \"ok\": true,\n \"rows\": [\n {\n \"id\": 1,\n \"title\": \"Updated title for 1\",\n \"description\": \"Updated description for 1\"\n },\n {\n \"id\": 2,\n \"title\": \"Item 2\",\n \"description\": \"Updated description for 2\"\n },\n {\n \"id\": 3,\n \"title\": \"Item 3\",\n \"description\": \"Description for 3\"\n }\n ]\n} \n When using upsert you must provide the primary key column (or columns if the table has a compound primary key) for every row, or you will get a 400 error: \n {\n \"ok\": false,\n \"errors\": [\n \"Row 0 is missing primary key column(s): \\\"id\\\"\"\n ]\n} \n If your table does not have an explicit primary key you should pass the SQLite rowid key instead. \n Pass \"alter\": true to automatically add any missing columns to the table. This requires the alter-table permission.", "breadcrumbs": "[\"JSON API\", \"The JSON write API\"]", "references": "[]"} {"id": "metadata:database-level-metadata", "page": "metadata", "ref": "database-level-metadata", "title": "Database-level metadata", "content": "\"Database-level\" metadata refers to fields that can be specified for each database in a Datasette instance. These attributes should be listed under a database inside the \"databases\" field. \n The following are the full list of allowed database-level metadata fields: \n \n \n source \n \n \n source_url \n \n \n license \n \n \n license_url \n \n \n about \n \n \n about_url", "breadcrumbs": "[\"Metadata\", \"Metadata reference\"]", "references": "[]"} {"id": "metadata:id1", "page": "metadata", "ref": "id1", "title": "Metadata", "content": "Data loves metadata. Any time you run Datasette you can optionally include a\n YAML or JSON file with metadata about your databases and tables. Datasette will then\n display that information in the web UI.
\n Run Datasette like this: \n datasette database1.db database2.db --metadata metadata.yaml \n Your metadata.yaml file can look something like this: \n [[[cog\nfrom metadata_doc import metadata_example\nmetadata_example(cog, {\n \"title\": \"Custom title for your index page\",\n \"description\": \"Some description text can go here\",\n \"license\": \"ODbL\",\n \"license_url\": \"https://opendatacommons.org/licenses/odbl/\",\n \"source\": \"Original Data Source\",\n \"source_url\": \"http://example.com/\"\n}) \n ]]] \n [[[end]]] \n Choosing YAML over JSON adds support for multi-line strings and comments. \n The above metadata will be displayed on the index page of your Datasette-powered\n site. The source and license information will also be included in the footer of\n every page served by Datasette. \n Any special HTML characters in description will be escaped. If you want to\n include HTML in your description, you can use a description_html property\n instead.", "breadcrumbs": "[]", "references": "[]"} {"id": "metadata:id2", "page": "metadata", "ref": "id2", "title": "Metadata reference", "content": "A full reference of every supported option in a metadata.json or metadata.yaml file.", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:label-columns", "page": "metadata", "ref": "label-columns", "title": "Specifying the label column for a table", "content": "Datasette's HTML interface attempts to display foreign key references as\n labelled hyperlinks. By default, it looks for referenced tables that only have\n two columns: a primary key column and one other. It assumes that the second\n column should be used as the link label. \n If your table has more than two columns you can specify which column should be\n used for the link label with the label_column property: \n [[[cog\nmetadata_example(cog, {\n \"databases\": {\n \"database1\": {\n \"tables\": {\n \"example_table\": {\n \"label_column\": \"title\"\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]]", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-default-sort", "page": "metadata", "ref": "metadata-default-sort", "title": "Setting a default sort order", "content": "By default Datasette tables are sorted by primary key. 
You can over-ride this default for a specific table using the \"sort\" or \"sort_desc\" metadata properties: \n [[[cog\nmetadata_example(cog, {\n \"databases\": {\n \"mydatabase\": {\n \"tables\": {\n \"example_table\": {\n \"sort\": \"created\"\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n Or use \"sort_desc\" to sort in descending order: \n [[[cog\nmetadata_example(cog, {\n \"databases\": {\n \"mydatabase\": {\n \"tables\": {\n \"example_table\": {\n \"sort_desc\": \"created\"\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]]", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-hiding-tables", "page": "metadata", "ref": "metadata-hiding-tables", "title": "Hiding tables", "content": "You can hide tables from the database listing view (in the same way that FTS and\n SpatiaLite tables are automatically hidden) using \"hidden\": true : \n [[[cog\nmetadata_example(cog, {\n \"databases\": {\n \"database1\": {\n \"tables\": {\n \"example_table\": {\n \"hidden\": True\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]]", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-page-size", "page": "metadata", "ref": "metadata-page-size", "title": "Setting a custom page size", "content": "Datasette defaults to displaying 100 rows per page, for both tables and views. You can change this default page size on a per-table or per-view basis using the \"size\" key in metadata.json : \n [[[cog\nmetadata_example(cog, {\n \"databases\": {\n \"mydatabase\": {\n \"tables\": {\n \"example_table\": {\n \"size\": 10\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n This size can still be over-ridden by passing e.g. ?_size=50 in the query string.", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-sortable-columns", "page": "metadata", "ref": "metadata-sortable-columns", "title": "Setting which columns can be used for sorting", "content": "Datasette allows any column to be used for sorting by default. If you need to\n control which columns are available for sorting you can do so using the optional\n sortable_columns key: \n [[[cog\nmetadata_example(cog, {\n \"databases\": {\n \"database1\": {\n \"tables\": {\n \"example_table\": {\n \"sortable_columns\": [\n \"height\",\n \"weight\"\n ]\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n This will restrict sorting of example_table to just the height and\n weight columns. \n You can also disable sorting entirely by setting \"sortable_columns\": [] \n You can use sortable_columns to enable specific sort orders for a view called name_of_view in the database my_database like so: \n [[[cog\nmetadata_example(cog, {\n \"databases\": {\n \"my_database\": {\n \"tables\": {\n \"name_of_view\": {\n \"sortable_columns\": [\n \"clicks\",\n \"impressions\"\n ]\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]]", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:metadata-source-license-about", "page": "metadata", "ref": "metadata-source-license-about", "title": "Source, license and about", "content": "The three visible metadata fields you can apply to everything, specific databases or specific tables are source, license and about. All three are optional. \n source and source_url should be used to indicate where the underlying data came from. \n license and license_url should be used to indicate the license under which the data can be used. \n about and about_url can be used to link to further information about the project - an accompanying blog entry for example. 
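\n As an illustration, here is how all six fields might look applied at the top level of a metadata file - the values are invented: \n [[[cog\nmetadata_example(cog, {\n    \"source\": \"Original Data Source\",\n    \"source_url\": \"http://example.com/\",\n    \"license\": \"ODbL\",\n    \"license_url\": \"https://opendatacommons.org/licenses/odbl/\",\n    \"about\": \"About this project\",\n    \"about_url\": \"http://example.com/about\"\n}) \n ]]] \n [[[end]]]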
\n For each of these you can provide just the *_url field and Datasette will treat that as the default link label text and display the URL directly on the page.", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:per-database-and-per-table-metadata", "page": "metadata", "ref": "per-database-and-per-table-metadata", "title": "Per-database and per-table metadata", "content": "Metadata at the top level of the file will be shown on the index page and in the\n footer on every page of the site. The license and source are expected to apply to\n all of your data. \n You can also provide metadata at the per-database or per-table level, like this: \n [[[cog\nmetadata_example(cog, {\n \"databases\": {\n \"database1\": {\n \"source\": \"Alternative source\",\n \"source_url\": \"http://example.com/\",\n \"tables\": {\n \"example_table\": {\n \"description_html\": \"Custom table description\",\n \"license\": \"CC BY 3.0 US\",\n \"license_url\": \"https://creativecommons.org/licenses/by/3.0/us/\"\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n Each of the top-level metadata fields can be used at the database and table level.", "breadcrumbs": "[\"Metadata\"]", "references": "[]"} {"id": "metadata:table-level-metadata", "page": "metadata", "ref": "table-level-metadata", "title": "Table-level metadata", "content": "\"Table-level\" metadata refers to fields that can be specified for each table in a Datasette instance. These attributes should be listed under a specific table using the \"tables\" field. \n The following is the full list of allowed table-level metadata fields: \n \n \n source \n \n \n source_url \n \n \n license \n \n \n license_url \n \n \n about \n \n \n about_url \n \n \n hidden \n \n \n sort/sort_desc \n \n \n size \n \n \n sortable_columns \n \n \n label_column \n \n \n facets \n \n \n fts_table \n \n \n fts_pk \n \n \n searchmode \n \n \n columns", "breadcrumbs": "[\"Metadata\", \"Metadata reference\"]", "references": "[]"} {"id": "metadata:top-level-metadata", "page": "metadata", "ref": "top-level-metadata", "title": "Top-level metadata", "content": "\"Top-level\" metadata refers to fields that can be specified at the root level of a metadata file. These attributes are meant to describe the entire Datasette instance. \n The following is the full list of allowed top-level metadata fields: \n \n \n title \n \n \n description \n \n \n description_html \n \n \n license \n \n \n license_url \n \n \n source \n \n \n source_url", "breadcrumbs": "[\"Metadata\", \"Metadata reference\"]", "references": "[]"} {"id": "pages:databaseview-hidden", "page": "pages", "ref": "databaseview-hidden", "title": "Hidden tables", "content": "Some tables listed on the database page are treated as hidden. Hidden tables are not completely invisible - they can be accessed through the \"hidden tables\" link at the bottom of the page. They are hidden because they represent low-level implementation details which are generally not useful to end-users of Datasette. \n The following tables are hidden by default: \n \n \n Any table with a name that starts with an underscore - this is a Datasette convention to help plugins easily hide their own internal tables. \n \n \n Tables that have been configured as \"hidden\": true using Hiding tables . \n \n \n *_fts tables that implement SQLite full-text search indexes. \n \n \n Tables relating to the inner workings of the SpatiaLite SQLite extension. 
\n \n \n sqlite_stat tables used to store statistics used by the query optimizer.", "breadcrumbs": "[\"Pages and API endpoints\", \"Database\"]", "references": "[]"} {"id": "pages:pages", "page": "pages", "ref": "pages", "title": "Pages and API endpoints", "content": "The Datasette web application offers a number of different pages that can be accessed to explore the data in question, each of which is accompanied by an equivalent JSON API.", "breadcrumbs": "[]", "references": "[]"} {"id": "performance:performance", "page": "performance", "ref": "performance", "title": "Performance and caching", "content": "Datasette runs on top of SQLite, and SQLite has excellent performance. For small databases almost any query should return in just a few milliseconds, and larger databases (100s of MBs or even GBs of data) should perform extremely well provided your queries make sensible use of database indexes. \n That said, there are a number of tricks you can use to improve Datasette's performance.", "breadcrumbs": "[]", "references": "[]"} {"id": "performance:performance-immutable-mode", "page": "performance", "ref": "performance-immutable-mode", "title": "Immutable mode", "content": "If you can be certain that a SQLite database file will not be changed by another process you can tell Datasette to open that file in immutable mode . \n Doing so will disable all locking and change detection, which can result in improved query performance. \n This also enables further optimizations relating to HTTP caching, described below. \n To open a file in immutable mode pass it to the datasette command using the -i option: \n datasette -i data.db \n When you open a file in immutable mode like this Datasette will also calculate and cache the row counts for each table in that database when it first starts up, further improving performance.", "breadcrumbs": "[\"Performance and caching\"]", "references": "[]"} {"id": "performance:performance-inspect", "page": "performance", "ref": "performance-inspect", "title": "Using \"datasette inspect\"", "content": "Counting the rows in a table can be a very expensive operation on larger databases. In immutable mode Datasette performs this count only once and caches the results, but this can still cause server startup time to increase by several seconds or more. \n If you know that a database is never going to change you can precalculate the table row counts once and store them in a JSON file, then use that file when you later start the server. \n To create a JSON file containing the calculated row counts for a database, use the following: \n datasette inspect data.db --inspect-file=counts.json \n Then later you can start Datasette against the counts.json file and use it to skip the row counting step and speed up server startup: \n datasette -i data.db --inspect-file=counts.json \n You need to use the -i immutable mode against the database file here or the counts from the JSON file will be ignored. \n You will rarely need to use this optimization in everyday use, but several of the datasette publish commands described in Publishing data use this optimization for better performance when deploying a database file to a hosting provider.", "breadcrumbs": "[\"Performance and caching\"]", "references": "[]"} {"id": "plugin_hooks:plugin-actions", "page": "plugin_hooks", "ref": "plugin-actions", "title": "Action hooks", "content": "Action hooks can be used to add items to the action menus that appear at the top of different pages within Datasette. 
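\n As a sketch, here is how one hook in this family - table_actions - might return a single menu item in the format described in the next paragraph; the URL, label and description are invented for illustration: \n from datasette import hookimpl\n\n\n@hookimpl\ndef table_actions(datasette, actor, database, table):\n    # Only offer the action to signed-in users\n    if actor is None:\n        return []\n    return [\n        {\n            \"href\": datasette.urls.path(\n                \"/-/hypothetical-table-tool/{}/{}\".format(database, table)\n            ),\n            \"label\": \"Open table tool\",\n            \"description\": \"A hypothetical tool for working with this table\",\n        }\n    ]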
Unlike menu_links() , whose links are displayed on every page, actions should only be relevant to the page the user is currently viewing. \n Each of these hooks should return a list of {\"href\": \"...\", \"label\": \"...\"} menu items, with optional \"description\": \"...\" keys describing each action in more detail. \n They can alternatively return an async def awaitable function which, when called, returns a list of those menu items.", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[]"} {"id": "plugin_hooks:plugin-event-tracking", "page": "plugin_hooks", "ref": "plugin-event-tracking", "title": "Event tracking", "content": "Datasette includes an internal mechanism for tracking notable events. This can be used for analytics, but can also be used by plugins that want to listen out for when key events occur (such as a table being created) and take action in response. \n Plugins can register to receive events using the track_event plugin hook. \n They can also define their own events for other plugins to receive using the register_events() plugin hook , combined with calls to the datasette.track_event() internal method .", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[]"} {"id": "plugin_hooks:plugin-hook-forbidden", "page": "plugin_hooks", "ref": "plugin-hook-forbidden", "title": "forbidden(datasette, request, message)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to render templates or execute SQL queries. \n \n \n \n request - Request object \n \n The current HTTP request. \n \n \n \n message - string \n \n A message hinting at why the request was forbidden. \n \n \n \n Plugins can use this to customize how Datasette responds when a 403 Forbidden error occurs - usually because a page failed a permission check, see Permissions . \n If a plugin hook wishes to react to the error, it should return a Response object . \n This example returns a redirect to a /-/login page: \n from datasette import hookimpl, Response\nfrom urllib.parse import urlencode\n\n\n@hookimpl\ndef forbidden(request, message):\n return Response.redirect(\n \"/-/login?\" + urlencode({\"message\": message})\n ) \n The function can alternatively return an awaitable function if it needs to make any asynchronous method calls. This example renders a template: \n from datasette import hookimpl, Response\n\n\n@hookimpl\ndef forbidden(datasette, request):\n async def inner():\n return Response.html(\n await datasette.render_template(\n \"render_message.html\", request=request\n )\n )\n\n return inner", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[]"} {"id": "plugin_hooks:plugin-hook-homepage-actions", "page": "plugin_hooks", "ref": "plugin-hook-homepage-actions", "title": "homepage_actions(datasette, actor, request)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. \n \n \n \n actor - dictionary or None \n \n The currently authenticated actor . \n \n \n \n request - Request object \n \n The current HTTP request. \n \n \n \n Populates an actions menu on the top-level index homepage of the Datasette instance. 
\n This example adds a link to an imagined tool for editing the homepage, only for signed-in users: \n from datasette import hookimpl\n\n\n@hookimpl\ndef homepage_actions(datasette, actor):\n if actor:\n return [\n {\n \"href\": datasette.urls.path(\n \"/-/customize-homepage\"\n ),\n \"label\": \"Customize homepage\",\n }\n ]", "breadcrumbs": "[\"Plugin hooks\", \"Action hooks\"]", "references": "[]"} {"id": "plugin_hooks:plugin-hook-register-events", "page": "plugin_hooks", "ref": "plugin-hook-register-events", "title": "register_events(datasette)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . \n \n \n \n This hook should return a list of Event subclasses that represent custom events that the plugin might send to the datasette.track_event() method. \n This example registers event subclasses for ban-user and unban-user events: \n from dataclasses import dataclass\nfrom datasette import hookimpl, Event\n\n\n@dataclass\nclass BanUserEvent(Event):\n name = \"ban-user\"\n user: dict\n\n\n@dataclass\nclass UnbanUserEvent(Event):\n name = \"unban-user\"\n user: dict\n\n\n@hookimpl\ndef register_events():\n return [BanUserEvent, UnbanUserEvent] \n The plugin can then call datasette.track_event(...) to send a ban-user event: \n await datasette.track_event(\n BanUserEvent(user={\"id\": 1, \"username\": \"cleverbot\"})\n)", "breadcrumbs": "[\"Plugin hooks\", \"Event tracking\"]", "references": "[]"} {"id": "plugin_hooks:plugin-hook-register-magic-parameters", "page": "plugin_hooks", "ref": "plugin-hook-register-magic-parameters", "title": "register_magic_parameters(datasette)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . \n \n \n \n Magic parameters can be used to add automatic parameters to canned queries . This plugin hook allows additional magic parameters to be defined by plugins. \n Magic parameters all take this format: _prefix_rest_of_parameter . The prefix indicates which magic parameter function should be called - the rest of the parameter is passed as an argument to that function. \n To register a new function, return it as a tuple of (string prefix, function) from this hook. The function you register should take two arguments: key and request , where key is the rest_of_parameter portion of the parameter and request is the current Request object . \n This example registers two new magic parameters: :_request_http_version , which returns the HTTP version of the current request, and :_uuid_new , which returns a new UUID: \n from datasette import hookimpl\nfrom uuid import uuid4\n\n\ndef uuid(key, request):\n if key == \"new\":\n return str(uuid4())\n else:\n raise KeyError\n\n\ndef request(key, request):\n if key == \"http_version\":\n return request.scope[\"http_version\"]\n else:\n raise KeyError\n\n\n@hookimpl\ndef register_magic_parameters(datasette):\n return [\n (\"request\", request),\n (\"uuid\", uuid),\n ]", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[]"} {"id": "plugin_hooks:plugin-hook-top-canned-query", "page": "plugin_hooks", "ref": "plugin-hook-top-canned-query", "title": "top_canned_query(datasette, request, database, query_name)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . \n \n \n \n request - Request object \n \n The current HTTP request. 
\n \n \n \n database - string \n \n The name of the database. \n \n \n \n query_name - string \n \n The name of the canned query. \n \n \n \n Returns HTML to be displayed at the top of the canned query page.", "breadcrumbs": "[\"Plugin hooks\", \"Template slots\"]", "references": "[]"} {"id": "plugin_hooks:plugin-hook-top-database", "page": "plugin_hooks", "ref": "plugin-hook-top-database", "title": "top_database(datasette, request, database)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . \n \n \n \n request - Request object \n \n The current HTTP request. \n \n \n \n database - string \n \n The name of the database. \n \n \n \n Returns HTML to be displayed at the top of the database page.", "breadcrumbs": "[\"Plugin hooks\", \"Template slots\"]", "references": "[]"} {"id": "plugin_hooks:plugin-hook-top-homepage", "page": "plugin_hooks", "ref": "plugin-hook-top-homepage", "title": "top_homepage(datasette, request)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . \n \n \n \n request - Request object \n \n The current HTTP request. \n \n \n \n Returns HTML to be displayed at the top of the Datasette homepage.", "breadcrumbs": "[\"Plugin hooks\", \"Template slots\"]", "references": "[]"} {"id": "plugin_hooks:plugin-hook-top-query", "page": "plugin_hooks", "ref": "plugin-hook-top-query", "title": "top_query(datasette, request, database, sql)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . \n \n \n \n request - Request object \n \n The current HTTP request. \n \n \n \n database - string \n \n The name of the database. \n \n \n \n sql - string \n \n The SQL query. \n \n \n \n Returns HTML to be displayed at the top of the query results page.", "breadcrumbs": "[\"Plugin hooks\", \"Template slots\"]", "references": "[]"} {"id": "plugin_hooks:plugin-hook-top-row", "page": "plugin_hooks", "ref": "plugin-hook-top-row", "title": "top_row(datasette, request, database, table, row)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . \n \n \n \n request - Request object \n \n The current HTTP request. \n \n \n \n database - string \n \n The name of the database. \n \n \n \n table - string \n \n The name of the table. \n \n \n \n row - sqlite3.Row \n \n The SQLite row object being displayed. \n \n \n \n Returns HTML to be displayed at the top of the row page.", "breadcrumbs": "[\"Plugin hooks\", \"Template slots\"]", "references": "[]"} {"id": "plugin_hooks:plugin-hook-top-table", "page": "plugin_hooks", "ref": "plugin-hook-top-table", "title": "top_table(datasette, request, database, table)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . \n \n \n \n request - Request object \n \n The current HTTP request. \n \n \n \n database - string \n \n The name of the database. \n \n \n \n table - string \n \n The name of the table. 
\n \n \n \n Returns HTML to be displayed at the top of the table page.", "breadcrumbs": "[\"Plugin hooks\", \"Template slots\"]", "references": "[]"} {"id": "plugin_hooks:plugin-hook-view-actions", "page": "plugin_hooks", "ref": "plugin-hook-view-actions", "title": "view_actions(datasette, actor, database, view, request)", "content": "datasette - Datasette class \n \n You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. \n \n \n \n actor - dictionary or None \n \n The currently authenticated actor . \n \n \n \n database - string \n \n The name of the database. \n \n \n \n view - string \n \n The name of the SQL view. \n \n \n \n request - Request object or None \n \n The current HTTP request. This can be None if the request object is not available. \n \n \n \n Like table_actions(datasette, actor, database, table, request) but for SQL views.", "breadcrumbs": "[\"Plugin hooks\", \"Action hooks\"]", "references": "[]"} {"id": "plugin_hooks:plugin-page-extras", "page": "plugin_hooks", "ref": "plugin-page-extras", "title": "Page extras", "content": "These plugin hooks can be used to affect the way HTML pages for different Datasette interfaces are rendered.", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[]"} {"id": "plugin_hooks:plugin-register-permissions", "page": "plugin_hooks", "ref": "plugin-register-permissions", "title": "register_permissions(datasette)", "content": "If your plugin needs to register additional permissions unique to that plugin - upload-csvs for example - you can return a list of those permissions from this hook. \n from datasette import hookimpl, Permission\n\n\n@hookimpl\ndef register_permissions(datasette):\n return [\n Permission(\n name=\"upload-csvs\",\n abbr=None,\n description=\"Upload CSV files\",\n takes_database=True,\n takes_resource=False,\n default=False,\n )\n ] \n The fields of the Permission class are as follows: \n \n \n name - string \n \n The name of the permission, e.g. upload-csvs . This should be unique across all plugins that the user might have installed, so choose carefully. \n \n \n \n abbr - string or None \n \n An abbreviation of the permission, e.g. uc . This is optional - you can set it to None if you do not want to pick an abbreviation. Since this needs to be unique across all installed plugins it's best not to specify an abbreviation at all. If an abbreviation is provided it will be used when creating restricted signed API tokens. \n \n \n \n description - string or None \n \n A human-readable description of what the permission lets you do. Should make sense as the second part of a sentence that starts \"A user with this permission can ...\". \n \n \n \n takes_database - boolean \n \n True if this permission can be granted on a per-database basis, False if it is only valid at the overall Datasette instance level. \n \n \n \n takes_resource - boolean \n \n True if this permission can be granted on a per-resource basis. A resource is a database table, SQL view or canned query . \n \n \n \n default - boolean \n \n The default value for this permission if it is not explicitly granted to a user. True means the permission is granted by default, False means it is not. 
\n This should only be True if you want anonymous users to be able to take this action.", "breadcrumbs": "[\"Plugin hooks\"]", "references": "[]"} {"id": "plugins:deploying-plugins-using-datasette-publish", "page": "plugins", "ref": "deploying-plugins-using-datasette-publish", "title": "Deploying plugins using datasette publish", "content": "The datasette publish and datasette package commands both take an optional --install argument. You can use this one or more times to tell Datasette to pip install specific plugins as part of the process: \n datasette publish cloudrun mydb.db --install=datasette-vega \n You can use the name of a package on PyPI or any of the other valid arguments to pip install such as a URL to a .zip file: \n datasette publish cloudrun mydb.db \\\n --install=https://url-to-my-package.zip", "breadcrumbs": "[\"Plugins\", \"Installing plugins\"]", "references": "[]"} {"id": "plugins:one-off-plugins-using-plugins-dir", "page": "plugins", "ref": "one-off-plugins-using-plugins-dir", "title": "One-off plugins using --plugins-dir", "content": "You can also define one-off per-project plugins by saving them as plugin_name.py files in a plugins/ folder and then passing that folder to datasette using the --plugins-dir option: \n datasette mydb.db --plugins-dir=plugins/", "breadcrumbs": "[\"Plugins\", \"Installing plugins\"]", "references": "[]"} {"id": "plugins:plugins-configuration", "page": "plugins", "ref": "plugins-configuration", "title": "Plugin configuration", "content": "Plugins can have their own configuration, embedded in a configuration file . Configuration options for plugins live within a \"plugins\" key in that file, which can be included at the root, database or table level. \n Here is an example of some plugin configuration for a specific table: \n [[[cog\nfrom metadata_doc import config_example\nconfig_example(cog, {\n \"databases\": {\n \"sf-trees\": {\n \"tables\": {\n \"Street_Tree_List\": {\n \"plugins\": {\n \"datasette-cluster-map\": {\n \"latitude_column\": \"lat\",\n \"longitude_column\": \"lng\"\n }\n }\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n This tells the datasette-cluster-map plugin which latitude and longitude columns should be used for a table called Street_Tree_List inside a database file called sf-trees.db .", "breadcrumbs": "[\"Plugins\"]", "references": "[]"} {"id": "plugins:plugins-configuration-secret", "page": "plugins", "ref": "plugins-configuration-secret", "title": "Secret configuration values", "content": "Some plugins may need configuration that should stay secret - API keys for example. There are two ways in which you can store secret configuration values. \n As environment variables . If your secret lives in an environment variable that is available to the Datasette process, you can indicate that the configuration value should be read from that environment variable like so: \n [[[cog\nconfig_example(cog, {\n \"plugins\": {\n \"datasette-auth-github\": {\n \"client_secret\": {\n \"$env\": \"GITHUB_CLIENT_SECRET\"\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n As values in separate files . Your secrets can also live in files on disk. 
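\n For example, you might first write the secret into a file like this (the path matches the configuration example that follows; any location readable by the Datasette process will do): \n printf 'your-secret-value' > /secrets/client-secret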
To specify that a secret should be read from a file, provide the full file path like this: \n [[[cog\nconfig_example(cog, {\n \"plugins\": {\n \"datasette-auth-github\": {\n \"client_secret\": {\n \"$file\": \"/secrets/client-secret\"\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n If you are publishing your data using the datasette publish family of commands, you can use the --plugin-secret option to set these secrets at publish time. For example, using Heroku you might run the following command: \n datasette publish heroku my_database.db \\\n --name my-heroku-app-demo \\\n --install=datasette-auth-github \\\n --plugin-secret datasette-auth-github client_id your_client_id \\\n --plugin-secret datasette-auth-github client_secret your_client_secret \n This will set the necessary environment variables and add the following to the deployed metadata.yaml : \n [[[cog\nconfig_example(cog, {\n \"plugins\": {\n \"datasette-auth-github\": {\n \"client_id\": {\n \"$env\": \"DATASETTE_AUTH_GITHUB_CLIENT_ID\"\n },\n \"client_secret\": {\n \"$env\": \"DATASETTE_AUTH_GITHUB_CLIENT_SECRET\"\n }\n }\n }\n}) \n ]]] \n [[[end]]]", "breadcrumbs": "[\"Plugins\", \"Plugin configuration\"]", "references": "[]"} {"id": "plugins:plugins-datasette-load-plugins", "page": "plugins", "ref": "plugins-datasette-load-plugins", "title": "Controlling which plugins are loaded", "content": "Datasette defaults to loading every plugin that is installed in the same virtual environment as Datasette itself. \n You can set the DATASETTE_LOAD_PLUGINS environment variable to a comma-separated list of plugin names to load a controlled subset of plugins instead. \n For example, to load just the datasette-vega and datasette-cluster-map plugins, set DATASETTE_LOAD_PLUGINS to datasette-vega,datasette-cluster-map : \n export DATASETTE_LOAD_PLUGINS='datasette-vega,datasette-cluster-map'\ndatasette mydb.db \n Or: \n DATASETTE_LOAD_PLUGINS='datasette-vega,datasette-cluster-map' \\\n datasette mydb.db \n To disable the loading of all additional plugins, set DATASETTE_LOAD_PLUGINS to an empty string: \n export DATASETTE_LOAD_PLUGINS=''\ndatasette mydb.db \n A quick way to test this setting is to use it with the datasette plugins command: \n DATASETTE_LOAD_PLUGINS='datasette-vega' datasette plugins \n This should output the following: \n [\n {\n \"name\": \"datasette-vega\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.6.2\",\n \"hooks\": [\n \"extra_css_urls\",\n \"extra_js_urls\"\n ]\n }\n]", "breadcrumbs": "[\"Plugins\"]", "references": "[]"} {"id": "plugins:plugins-installing", "page": "plugins", "ref": "plugins-installing", "title": "Installing plugins", "content": "If a plugin has been packaged for distribution using setuptools you can use the plugin by installing it alongside Datasette in the same virtual environment or Docker container. 
\n You can install plugins using the datasette install command: \n datasette install datasette-vega \n You can uninstall plugins with datasette uninstall : \n datasette uninstall datasette-vega \n You can upgrade plugins with datasette install --upgrade or datasette install -U : \n datasette install -U datasette-vega \n This command can also be used to upgrade Datasette itself to the latest released version: \n datasette install -U datasette \n You can install multiple plugins at once by listing them as lines in a requirements.txt file like this: \n datasette-vega\ndatasette-cluster-map \n Then pass that file to datasette install -r : \n datasette install -r requirements.txt \n The install and uninstall commands are thin wrappers around pip install and pip uninstall , which ensure that they run pip in the same virtual environment as Datasette itself.", "breadcrumbs": "[\"Plugins\"]", "references": "[]"} {"id": "publish:publishing", "page": "publish", "ref": "publishing", "title": "Publishing data", "content": "Datasette includes tools for publishing and deploying your data to the internet. The datasette publish command will deploy a new Datasette instance containing your databases directly to a Heroku or Google Cloud hosting account. You can also use datasette package to create a Docker image that bundles your databases together with the datasette application that is used to serve them.", "breadcrumbs": "[]", "references": "[]"} {"id": "settings:config-dir", "page": "settings", "ref": "config-dir", "title": "Configuration directory mode", "content": "Normally you configure Datasette using command-line options. For a Datasette instance with custom templates, custom plugins, a static directory and several databases this can get quite verbose: \n datasette one.db two.db \\\n --metadata=metadata.json \\\n --template-dir=templates/ \\\n --plugins-dir=plugins \\\n --static css:css \n As an alternative to this, you can run Datasette in configuration directory mode. Create a directory with the following structure: \n # In a directory called my-app:\nmy-app/one.db\nmy-app/two.db\nmy-app/datasette.yaml\nmy-app/metadata.json\nmy-app/templates/index.html\nmy-app/plugins/my_plugin.py\nmy-app/static/my.css \n Now start Datasette by providing the path to that directory: \n datasette my-app/ \n Datasette will detect the files in that directory and automatically configure itself using them. It will serve all *.db files that it finds, will load metadata.json if it exists, and will load the templates , plugins and static folders if they are present. \n The files that can be included in this directory are as follows. All are optional. 
\n \n \n *.db (or *.sqlite3 or *.sqlite ) - SQLite database files that will be served by Datasette \n \n \n datasette.yaml - Configuration for the Datasette instance \n \n \n metadata.json - Metadata for those databases - metadata.yaml or metadata.yml can be used as well \n \n \n inspect-data.json - the result of running datasette inspect *.db --inspect-file=inspect-data.json from the configuration directory - any database files listed here will be treated as immutable, so they should not be changed while Datasette is running \n \n \n templates/ - a directory containing Custom templates \n \n \n plugins/ - a directory containing plugins, see Writing one-off plugins \n \n \n static/ - a directory containing static files - these will be served from /static/filename.txt , see Serving static files", "breadcrumbs": "[\"Settings\"]", "references": "[]"} {"id": "settings:id1", "page": "settings", "ref": "id1", "title": "Settings", "content": "", "breadcrumbs": "[]", "references": "[]"} {"id": "settings:id2", "page": "settings", "ref": "id2", "title": "Settings", "content": "The following options can be set using --setting name value , or by storing them in the settings.json file for use with Configuration directory mode .", "breadcrumbs": "[\"Settings\"]", "references": "[]"} {"id": "settings:setting-allow-csv-stream", "page": "settings", "ref": "setting-allow-csv-stream", "title": "allow_csv_stream", "content": "Enables the CSV export feature where an entire table\n (potentially hundreds of thousands of rows) can be exported as a single CSV\n file. This is turned on by default - you can turn it off like this: \n datasette mydatabase.db --setting allow_csv_stream off", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-allow-download", "page": "settings", "ref": "setting-allow-download", "title": "allow_download", "content": "Should users be able to download the original SQLite database using a link on the database index page? This is turned on by default. However, databases can only be downloaded if they are served in immutable mode and not in-memory. If downloading is unavailable for either of these reasons, the download link is hidden even if allow_download is on. To disable database downloads, use the following: \n datasette mydatabase.db --setting allow_download off", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-allow-facet", "page": "settings", "ref": "setting-allow-facet", "title": "allow_facet", "content": "Allow users to specify columns they would like to facet on using the ?_facet=COLNAME URL parameter to the table view. \n This is enabled by default. If disabled, facets will still be displayed if they have been specifically enabled in metadata.json configuration for the table. \n Here's how to disable this feature: \n datasette mydatabase.db --setting allow_facet off", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-allow-signed-tokens", "page": "settings", "ref": "setting-allow-signed-tokens", "title": "allow_signed_tokens", "content": "Should users be able to create signed API tokens to access Datasette? \n This is turned on by default. Use the following to turn it off: \n datasette mydatabase.db --setting allow_signed_tokens off \n Turning this setting off will disable the /-/create-token page, described here . It will also cause any incoming Authorization: Bearer dstok_... 
API tokens to be ignored.", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-base-url", "page": "settings", "ref": "setting-base-url", "title": "base_url", "content": "If you are running Datasette behind a proxy, it may be useful to change the root path used for the Datasette instance. \n For example, if you are sending traffic from https://www.example.com/tools/datasette/ through to a proxied Datasette instance you may wish Datasette to use /tools/datasette/ as its root URL. \n You can do that like so: \n datasette mydatabase.db --setting base_url /tools/datasette/", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-default-allow-sql", "page": "settings", "ref": "setting-default-allow-sql", "title": "default_allow_sql", "content": "Should users be able to execute arbitrary SQL queries by default? \n Setting this to off causes permission checks for execute-sql to fail by default. \n datasette mydatabase.db --setting default_allow_sql off \n Another way to achieve this is to add \"allow_sql\": false to your datasette.yaml file, as described in Controlling the ability to execute arbitrary SQL . This setting offers a more convenient way to do this.", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-default-cache-ttl", "page": "settings", "ref": "setting-default-cache-ttl", "title": "default_cache_ttl", "content": "Default HTTP caching max-age header in seconds, used for Cache-Control: max-age=X . Can be over-ridden on a per-request basis using the ?_ttl= query string parameter. Set this to 0 to disable HTTP caching entirely. Defaults to 5 seconds. \n datasette mydatabase.db --setting default_cache_ttl 60", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-default-facet-size", "page": "settings", "ref": "setting-default-facet-size", "title": "default_facet_size", "content": "The default number of unique rows returned by Facets is 30. You can customize it like this: \n datasette mydatabase.db --setting default_facet_size 50", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-default-page-size", "page": "settings", "ref": "setting-default-page-size", "title": "default_page_size", "content": "The default number of rows returned by the table page. You can over-ride this on a per-page basis using the ?_size=80 query string parameter, provided you do not specify a value higher than the max_returned_rows setting. You can set this default using --setting like so: \n datasette mydatabase.db --setting default_page_size 50", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-facet-suggest-time-limit-ms", "page": "settings", "ref": "setting-facet-suggest-time-limit-ms", "title": "facet_suggest_time_limit_ms", "content": "When Datasette calculates suggested facets it needs to run a SQL query for every column in your table. The default for this time limit is 50ms to account for the fact that it needs to run once for every column. If the time limit is exceeded the column will not be suggested as a facet. 
\n You can increase this time limit like so: \n datasette mydatabase.db --setting facet_suggest_time_limit_ms 500", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-facet-time-limit-ms", "page": "settings", "ref": "setting-facet-time-limit-ms", "title": "facet_time_limit_ms", "content": "This is the time limit Datasette allows for calculating a facet, which defaults to 200ms: \n datasette mydatabase.db --setting facet_time_limit_ms 1000", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"} {"id": "settings:setting-force-https-urls", "page": "settings", "ref": "setting-force-https-urls", "title": "force_https_urls", "content": "Forces self-referential URLs in the JSON output to always use the https:// \n protocol. This is useful for cases where the application itself is hosted using\n HTTP but is served to the outside world via a proxy that enables HTTPS. \n datasette mydatabase.db --setting force_https_urls 1", "breadcrumbs": "[\"Settings\", \"Settings\"]", "references": "[]"}