id,page,ref,title,content,breadcrumbs,references
internals:datasette-resolve-database,internals,datasette-resolve-database,.resolve_database(request),"request - Request object
A request object
If you are implementing your own custom views, you may need to resolve the database that the user is requesting based on a URL path. If the regular expression for your route declares a database named group, you can use this method to resolve the database object.
This returns a Database instance.
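For example, a sketch of a custom view that uses this method - the route and view name here are illustrative, not part of Datasette:
from datasette import hookimpl, Response

@hookimpl
def register_routes():
    return [(r""/-/hello/(?P<database>[^/]+)$"", hello)]

async def hello(datasette, request):
    # The route above declares a ""database"" named group
    db = datasette.resolve_database(request)
    return Response.text(""Database: "" + db.name)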
If the database cannot be found, it raises a datasette.utils.asgi.DatabaseNotFound exception - which is a subclass of datasette.utils.asgi.NotFound with a .database_name attribute set to the name of the database that was requested.","[""Internals for plugins"", ""Datasette class""]",[]
internals:datasette-resolve-row,internals,datasette-resolve-row,.resolve_row(request),"request - Request object
A request object
This method assumes your route declares named groups for database , table and pks .
It returns a ResolvedRow named tuple instance with the following fields:
db - Database
The database object
table - string
The name of the table
sql - string
SQL snippet that can be used in a WHERE clause to select the row
params - dict
Parameters that should be passed to the SQL query
pks - list
List of primary key column names
pk_values - list
List of primary key values decoded from the URL
row - sqlite3.Row
The row itself
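For example, a sketch that re-fetches the resolved row:
resolved = await datasette.resolve_row(request)
results = await resolved.db.execute(
    # resolved.sql is a WHERE-clause snippet, resolved.params its parameters
    ""select * from [{}] where {}"".format(resolved.table, resolved.sql),
    resolved.params,
)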
If the database cannot be found it raises a datasette.utils.asgi.DatabaseNotFound exception.
If the table does not exist it raises a datasette.utils.asgi.TableNotFound exception.
If the row cannot be found it raises a datasette.utils.asgi.RowNotFound exception. This has .database_name , .table and .pk_values attributes, extracted from the request path.","[""Internals for plugins"", ""Datasette class""]",[]
internals:datasette-resolve-table,internals,datasette-resolve-table,.resolve_table(request),"request - Request object
A request object
This assumes that the regular expression for your route declares both a database and a table named group.
It returns a ResolvedTable named tuple instance with the following fields:
db - Database
The database object
table - string
The name of the table (or view)
is_view - boolean
True if this is a view, False if it is a table
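For example, a sketch that counts the rows in the resolved table:
resolved = await datasette.resolve_table(request)
if not resolved.is_view:
    results = await resolved.db.execute(
        ""select count(*) from [{}]"".format(resolved.table)
    )
    row_count = results.single_value()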
If the database cannot be found it raises a datasette.utils.asgi.DatabaseNotFound exception.
If the table does not exist it raises a datasette.utils.asgi.TableNotFound exception - a subclass of datasette.utils.asgi.NotFound with .database_name and .table attributes.","[""Internals for plugins"", ""Datasette class""]",[]
internals:datasette-setting,internals,datasette-setting,.setting(key),"key - string
The name of the setting, e.g. base_url .
Returns the configured value for the specified setting . This can be a string, boolean or integer depending on the requested setting.
For example:
downloads_are_allowed = datasette.setting(""allow_download"")","[""Internals for plugins"", ""Datasette class""]",[]
internals:datasette-track-event,internals,datasette-track-event,await .track_event(event),"event - Event
An instance of a subclass of datasette.events.Event .
Plugins can call this to track events, using classes they have previously registered. See Event tracking for details.
The event will then be passed to all plugins that have registered to receive events using the track_event(datasette, event) hook.
Example usage, assuming the plugin has previously registered the BanUserEvent class:
await datasette.track_event(
    BanUserEvent(user={""id"": 1, ""username"": ""cleverbot""})
)","[""Internals for plugins"", ""Datasette class""]",[]
internals:datasette-unsign,internals,datasette-unsign,".unsign(value, namespace=""default"")","signed - string
The signed string that was created using .sign(value, namespace=""default"") .
namespace - string, optional
The alternative namespace, if one was used.
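A round-trip looks like this (a minimal sketch - the namespace is illustrative):
signed = datasette.sign({""id"": 1}, namespace=""tokens"")
decoded = datasette.unsign(signed, namespace=""tokens"")
assert decoded == {""id"": 1}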
Returns the original, decoded object that was passed to .sign(value, namespace=""default"") . If the signature is not valid this raises a itsdangerous.BadSignature exception.","[""Internals for plugins"", ""Datasette class""]",[]
internals:id1,internals,id1,.get_internal_database(),Returns a database object for reading and writing to the private internal database .,"[""Internals for plugins"", ""Datasette class""]",[]
internals:internals,internals,internals,Internals for plugins,Many Plugin hooks are passed objects that provide access to internal Datasette functionality. The interface to these objects should not be considered stable with the exception of methods that are documented here.,[],[]
internals:internals-database,internals,internals-database,Database class,"Instances of the Database class can be used to execute queries against attached SQLite databases, and to run introspection against their schemas.","[""Internals for plugins""]",[]
internals:internals-database-introspection,internals,internals-database-introspection,Database introspection,"The Database class also provides properties and methods for introspecting the database.
db.name - string
The name of the database - usually the filename without the .db extension.
db.size - integer
The size of the database file in bytes. 0 for :memory: databases.
db.mtime_ns - integer or None
The last modification time of the database file in nanoseconds since the epoch. None for :memory: databases.
db.is_mutable - boolean
Is this database mutable, and allowed to accept writes?
db.is_memory - boolean
Is this database an in-memory database?
await db.attached_databases() - list of named tuples
Returns a list of additional databases that have been connected to this database using the SQLite ATTACH command. Each named tuple has fields seq , name and file .
await db.table_exists(table) - boolean
Check if a table called table exists.
await db.view_exists(view) - boolean
Check if a view called view exists.
await db.table_names() - list of strings
List of names of tables in the database.
await db.view_names() - list of strings
List of names of views in the database.
await db.table_columns(table) - list of strings
Names of columns in a specific table.
await db.table_column_details(table) - list of named tuples
Full details of the columns in a specific table. Each column is represented by a Column named tuple with fields cid (integer representing the column position), name (string), type (string, e.g. REAL or VARCHAR(30) ), notnull (integer 1 or 0), default_value (string or None), is_pk (integer 1 or 0).
await db.primary_keys(table) - list of strings
Names of the columns that are part of the primary key for this table.
await db.fts_table(table) - string or None
The name of the FTS table associated with this table, if one exists.
await db.label_column_for_table(table) - string or None
The label column that is associated with this table - either automatically detected or using the ""label_column"" key from Metadata , see Specifying the label column for a table .
await db.foreign_keys_for_table(table) - list of dictionaries
Details of columns in this table which are foreign keys to other tables. A list of dictionaries where each dictionary is shaped like this: {""column"": string, ""other_table"": string, ""other_column"": string} .
await db.hidden_table_names() - list of strings
List of tables which Datasette ""hides"" by default - usually these are tables associated with SQLite's full-text search feature, the SpatiaLite extension or tables hidden using the Hiding tables feature.
await db.get_table_definition(table) - string
Returns the SQL definition for the table - the CREATE TABLE statement and any associated CREATE INDEX statements.
await db.get_view_definition(view) - string
Returns the SQL definition of the named view.
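For example, a sketch combining several of these methods - the ""fixtures"" database and ""facetable"" table are illustrative:
db = datasette.get_database(""fixtures"")
if await db.table_exists(""facetable""):
    columns = await db.table_columns(""facetable"")
    pks = await db.primary_keys(""facetable"")
    definition = await db.get_table_definition(""facetable"")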
await db.get_all_foreign_keys() - dictionary
Dictionary representing both incoming and outgoing foreign keys for this table. It has two keys, ""incoming"" and ""outgoing"" , each of which is a list of dictionaries with keys ""column"" , ""other_table"" and ""other_column"" . For example:
{
    ""incoming"": [],
    ""outgoing"": [
        {
            ""other_table"": ""attraction_characteristic"",
            ""column"": ""characteristic_id"",
            ""other_column"": ""pk"",
        },
        {
            ""other_table"": ""roadside_attractions"",
            ""column"": ""attraction_id"",
            ""other_column"": ""pk"",
        }
    ]
}","[""Internals for plugins"", ""Database class""]",[]
internals:internals-datasette,internals,internals-datasette,Datasette class,"This object is an instance of the Datasette class, passed to many plugin hooks as an argument called datasette .
You can create your own instance of this - for example to help write tests for a plugin - like so:
from datasette.app import Datasette
# With no arguments a single in-memory database will be attached
datasette = Datasette()
# The files= argument can load files from disk
datasette = Datasette(files=[""/path/to/my-database.db""])
# Pass metadata as a JSON dictionary like this
datasette = Datasette(
    files=[""/path/to/my-database.db""],
    metadata={
        ""databases"": {
            ""my-database"": {
                ""description"": ""This is my database""
            }
        }
    },
)
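This also makes it easy to exercise an instance in tests. A sketch, assuming pytest with pytest-asyncio (a common setup for Datasette plugins):
import pytest
from datasette.app import Datasette

@pytest.mark.asyncio
async def test_instance_responds():
    ds = Datasette(memory=True)
    # datasette.client is an httpx-style client for making internal requests
    response = await ds.client.get(""/-/versions.json"")
    assert response.status_code == 200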
Constructor parameters include:
files=[...] - a list of database files to open
immutables=[...] - a list of database files to open in immutable mode
metadata={...} - a dictionary of Metadata
config_dir=... - the configuration directory to use, stored in datasette.config_dir","[""Internals for plugins""]",[]
internals:internals-datasette-urls,internals,internals-datasette-urls,datasette.urls,"The datasette.urls object contains methods for building URLs to pages within Datasette. Plugins should use this to link to pages, since these methods take into account any base_url configuration setting that might be in effect.
datasette.urls.instance(format=None)
Returns the URL to the Datasette instance root page. This is usually ""/"" .
datasette.urls.path(path, format=None)
Takes a path and returns the full path, taking base_url into account.
For example, datasette.urls.path(""-/logout"") will return the path to the logout page, which will be ""/-/logout"" by default or /prefix-path/-/logout if base_url is set to /prefix-path/
datasette.urls.logout()
Returns the URL to the logout page, usually ""/-/logout""
datasette.urls.static(path)
Returns the URL of one of Datasette's default static assets, for example ""/-/static/app.css""
datasette.urls.static_plugins(plugin_name, path)
Returns the URL of one of the static assets belonging to a plugin.
datasette.urls.static_plugins(""datasette_cluster_map"", ""datasette-cluster-map.js"") would return ""/-/static-plugins/datasette_cluster_map/datasette-cluster-map.js""
datasette.urls.database(database_name, format=None)
Returns the URL to a database page, for example ""/fixtures""
datasette.urls.table(database_name, table_name, format=None)
Returns the URL to a table page, for example ""/fixtures/facetable""
datasette.urls.query(database_name, query_name, format=None)
Returns the URL to a query page, for example ""/fixtures/pragma_cache_size""
These functions can be accessed via the {{ urls }} object in Datasette templates, for example:
<a href=""{{ urls.instance() }}"">Homepage</a>
<a href=""{{ urls.database(""fixtures"") }}"">Fixtures database</a>
<a href=""{{ urls.table(""fixtures"", ""facetable"") }}"">facetable table</a>
<a href=""{{ urls.query(""fixtures"", ""pragma_cache_size"") }}"">pragma_cache_size query</a>
Use the format=""json"" (or ""csv"" or other formats supported by plugins) arguments to get back URLs to the JSON representation. This is the path with .json added on the end.
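For example (a sketch):
json_url = datasette.urls.table(""fixtures"", ""facetable"", format=""json"")
# ""/fixtures/facetable.json"", with any base_url prefix applied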
These methods each return a datasette.utils.PrefixedUrlString object, which is a subclass of the Python str type. This allows the logic that considers the base_url setting to detect if that prefix has already been applied to the path.","[""Internals for plugins"", ""Datasette class""]",[]
internals:internals-internal,internals,internals-internal,Datasette's internal database,"Datasette maintains an ""internal"" SQLite database used for configuration, caching, and storage. Plugins can store configuration, settings, and other data inside this database. By default, Datasette will use a temporary in-memory SQLite database as the internal database, which is created at startup and destroyed at shutdown. Users of Datasette can optionally pass in a --internal flag to specify the path to a SQLite database to use as the internal database, which will persist internal data across Datasette instances.
Datasette maintains tables called catalog_databases , catalog_tables , catalog_columns , catalog_indexes , catalog_foreign_keys with details of the attached databases and their schemas. These tables should not be considered a stable API - they may change between Datasette releases.
The internal database is not exposed in the Datasette application by default, which means private data can safely be stored without worry of accidentally leaking information through the default Datasette interface and API. However, other plugins do have full read and write access to the internal database.
Plugins can access this database by calling internal_db = datasette.get_internal_database() and then executing queries using the Database API .
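For example, a sketch that creates a plugin-specific table - the table name uses a hypothetical datasette-xyz plugin prefix, following the etiquette below:
internal_db = datasette.get_internal_database()
await internal_db.execute_write(
    ""create table if not exists datasette_xyz_state (key text primary key, value text)""
)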
Plugin authors are asked to practice good etiquette when using the internal database, as all plugins use the same database to store data. For example:
Use a unique prefix when creating tables, indices, and triggers in the internal database. If your plugin is called datasette-xyz , then prefix names with datasette_xyz_* .
Avoid long-running write statements that may stall or block other plugins that are trying to write at the same time.
Use temporary tables or shared in-memory attached databases when possible.
Avoid implementing features that could expose private data stored in the internal database by other plugins.","[""Internals for plugins""]",[]
internals:internals-multiparams,internals,internals-multiparams,The MultiParams class,"request.args is a MultiParams object - a dictionary-like object which provides access to query string parameters that may have multiple values.
Consider the query string ?foo=1&foo=2&bar=3 - with two values for foo and one value for bar .
request.args[key] - string
Returns the first value for that key, or raises a KeyError if the key is missing. For the above example request.args[""foo""] would return ""1"" .
request.args.get(key) - string or None
Returns the first value for that key, or None if the key is missing. Pass a second argument to specify a different default, e.g. q = request.args.get(""q"", """") .
request.args.getlist(key) - list of strings
Returns the list of strings for that key. request.args.getlist(""foo"") would return [""1"", ""2""] in the above example. request.args.getlist(""bar"") would return [""3""] . If the key is missing an empty list will be returned.
request.args.keys() - list of strings
Returns the list of available keys - for the example this would be [""foo"", ""bar""] .
key in request.args - True or False
You can use if key in request.args to check if a key is present.
for key in request.args - iterator
This lets you loop through every available key.
len(request.args) - integer
Returns the number of keys.","[""Internals for plugins""]",[]
internals:internals-response,internals,internals-response,Response class,"The Response class can be returned from view functions that have been registered using the register_routes(datasette) hook.
The Response() constructor takes the following arguments:
body - string
The body of the response.
status - integer (optional)
The HTTP status - defaults to 200.
headers - dictionary (optional)
A dictionary of extra HTTP headers, e.g. {""x-hello"": ""world""} .
content_type - string (optional)
The content-type for the response. Defaults to text/plain .
For example:
from datasette.utils.asgi import Response
response = Response(
    ""This is XML"",
    content_type=""application/xml; charset=utf-8"",
)
The quickest way to create responses is using the Response.text(...) , Response.html(...) , Response.json(...) or Response.redirect(...) helper methods:
from datasette.utils.asgi import Response
html_response = Response.html(""This is HTML"")
json_response = Response.json({""this_is"": ""json""})
text_response = Response.text(
    ""This will become utf-8 encoded text""
)
# Redirects are served as 302, unless you pass status=301:
redirect_response = Response.redirect(
    ""https://latest.datasette.io/""
)
Each of these responses will use the correct corresponding content-type - text/html; charset=utf-8 , application/json; charset=utf-8 or text/plain; charset=utf-8 respectively.
Each of these helper methods takes optional status= and headers= arguments, documented above.","[""Internals for plugins""]",[]
internals:internals-response-asgi-send,internals,internals-response-asgi-send,Returning a response with .asgi_send(send),"In most cases you will return Response objects from your own view functions. You can also use a Response instance to respond at a lower level via ASGI, for example if you are writing code that uses the asgi_wrapper(datasette) hook.
Create a Response object and then use await response.asgi_send(send) , passing the ASGI send function. For example:
from datasette.utils.asgi import Response

async def require_authorization(scope, receive, send):
    response = Response.text(
        ""401 Authorization Required"",
        headers={
            ""www-authenticate"": 'Basic realm=""Datasette"", charset=""UTF-8""'
        },
        status=401,
    )
    await response.asgi_send(send)","[""Internals for plugins"", ""Response class""]",[]
internals:internals-response-set-cookie,internals,internals-response-set-cookie,Setting cookies with response.set_cookie(),"To set cookies on the response, use the response.set_cookie(...) method. The method signature looks like this:
def set_cookie(
    self,
    key,
    value="""",
    max_age=None,
    expires=None,
    path=""/"",
    domain=None,
    secure=False,
    httponly=False,
    samesite=""lax"",
): ...
You can use this with datasette.sign() to set signed cookies. Here's how you would set the ds_actor cookie for use with Datasette authentication :
response = Response.redirect(""/"")
response.set_cookie(
    ""ds_actor"",
    datasette.sign({""a"": {""id"": ""cleopaws""}}, ""actor""),
)
return response","[""Internals for plugins"", ""Response class""]",[]
internals:internals-shortcuts,internals,internals-shortcuts,Import shortcuts,"The following commonly used symbols can be imported directly from the datasette module:
from datasette import Response
from datasette import Forbidden
from datasette import NotFound
from datasette import hookimpl
from datasette import actor_matches_allow","[""Internals for plugins""]",[]
internals:internals-utils-derive-named-parameters,internals,internals-utils-derive-named-parameters,"derive_named_parameters(db, sql)","Derive the list of named parameters referenced in a SQL query, using an explain query executed against the provided database.
async datasette.utils.derive_named_parameters(db: Database, sql: str) -> List[str]
Given a SQL statement, return a list of named parameters that are used in the statement
e.g. for select * from foo where id=:id this would return [""id""]","[""Internals for plugins"", ""The datasette.utils module""]",[]
internals:internals-utils-parse-metadata,internals,internals-utils-parse-metadata,parse_metadata(content),"This function accepts a string containing either JSON or YAML, expected to be of the format described in Metadata . It returns a nested Python dictionary representing the parsed data from that string.
If the metadata cannot be parsed as either JSON or YAML the function will raise a utils.BadMetadataError exception.
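For example (a minimal sketch):
from datasette.utils import parse_metadata

metadata = parse_metadata('{""title"": ""My Datasette instance""}')
# {'title': 'My Datasette instance'}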
datasette.utils.parse_metadata(content: str) -> dict
Detects if content is JSON or YAML and parses it appropriately.","[""Internals for plugins"", ""The datasette.utils module""]",[]
introspection:id1,introspection,id1,Introspection,"Datasette includes some pages and JSON API endpoints for introspecting the current instance. These can be used to understand some of the internals of Datasette and to see how a particular instance has been configured.
Each of these pages can be viewed in your browser. Add .json to the URL to get back the contents as JSON.",[],[]
introspection:jsondataview-actor,introspection,jsondataview-actor,/-/actor,"Shows the currently authenticated actor. Useful for debugging Datasette authentication plugins.
{
    ""actor"": {
        ""id"": 1,
        ""username"": ""some-user""
    }
}","[""Introspection""]",[]
introspection:messagesdebugview,introspection,messagesdebugview,/-/messages,"The debug tool at /-/messages can be used to set flash messages to try out that feature. See .add_message(request, message, type=datasette.INFO) for details of this feature.","[""Introspection""]",[]
javascript_plugins:id1,javascript_plugins,id1,JavaScript plugins,"Datasette can run custom JavaScript in several different ways:
Datasette plugins written in Python can use the extra_js_urls() or extra_body_script() plugin hooks to inject JavaScript into a page
Datasette instances with custom templates can include additional JavaScript in those templates
The extra_js_urls key in datasette.yaml can be used to include extra JavaScript
There are no limitations on what this JavaScript can do. It is executed directly by the browser, so it can manipulate the DOM, fetch additional data and do anything else that JavaScript is capable of.
Custom JavaScript has security implications, especially for authenticated Datasette instances where the JavaScript might run in the context of the authenticated user. It's important to carefully review any JavaScript you run in your Datasette instance.",[],[]
javascript_plugins:id2,javascript_plugins,id2,JavaScript plugin objects,"JavaScript plugins are blocks of code that can be registered with Datasette using the registerPlugin() method on the datasetteManager object.
The implementation object passed to this method should include a version key defining the plugin version, and one or more of the following named functions providing the implementation of the plugin:","[""JavaScript plugins""]",[]
javascript_plugins:javascript-datasette-init,javascript_plugins,javascript-datasette-init,The datasette_init event,"Datasette emits a custom event called datasette_init when the page is loaded. This event is dispatched on the document object, and includes a detail object with a reference to the datasetteManager object.
Your JavaScript code can listen out for this event using document.addEventListener() like this:
document.addEventListener(""datasette_init"", function (evt) {
  const manager = evt.detail;
  console.log(""Datasette version:"", manager.VERSION);
});","[""JavaScript plugins""]",[]
javascript_plugins:javascript-datasette-manager,javascript_plugins,javascript-datasette-manager,datasetteManager,"The datasetteManager object
VERSION - string
The version of Datasette
plugins - Map()
A Map of currently loaded plugin names to plugin implementations
registerPlugin(name, implementation)
Call this to register a plugin, passing its name and implementation
selectors - object
An object providing named aliases to useful CSS selectors, listed below","[""JavaScript plugins""]",[]
javascript_plugins:javascript-datasette-manager-selectors,javascript_plugins,javascript-datasette-manager-selectors,Selectors,"These are available on the selectors property of the datasetteManager object.
const DOM_SELECTORS = {
  /** Should have one match */
  jsonExportLink: "".export-links a[href*=json]"",
  /** Event listeners that go outside of the main table, e.g. existing scroll listener */
  tableWrapper: "".table-wrapper"",
  table: ""table.rows-and-columns"",
  aboveTablePanel: "".above-table-panel"",
  // These could have multiple matches
  /** Used for selecting table headers. Use makeColumnActions if you want to add menu items. */
  tableHeaders: `table.rows-and-columns th`,
  /** Used to add ""where"" clauses to query using direct manipulation */
  filterRows: "".filter-row"",
  /** Used to show top available enum values for a column (""facets"") */
  facetResults: "".facet-results [data-column]"",
};","[""JavaScript plugins""]",[]
javascript_plugins:javascript-plugins-makeabovetablepanelconfigs,javascript_plugins,javascript-plugins-makeabovetablepanelconfigs,makeAboveTablePanelConfigs(),"This method should return a JavaScript array of objects defining additional panels to be added to the top of the table page. Each object should have the following:
id - string
A unique string ID for the panel, for example map-panel
label - string
A human-readable label for the panel
render(node) - function
A function that will be called with a DOM node to render the panel into
This example shows how a plugin might define a single panel:
document.addEventListener('datasette_init', function(ev) {
  ev.detail.registerPlugin('panel-plugin', {
    version: 0.1,
    makeAboveTablePanelConfigs: () => {
      return [
        {
          id: 'first-panel',
          label: 'First panel',
          render: node => {
            node.innerHTML = '<h2>My custom panel</h2><p>This is a custom panel that I added using a JavaScript plugin</p>';
          }
        }
      ]
    }
  });
});
When a page with a table loads, all registered plugins that implement makeAboveTablePanelConfigs() will be called and panels they return will be added to the top of the table page.","[""JavaScript plugins"", ""JavaScript plugin objects""]",[]
javascript_plugins:javascript-plugins-makecolumnactions,javascript_plugins,javascript-plugins-makecolumnactions,makeColumnActions(columnDetails),"This method, if present, will be called when Datasette is rendering the cog action menu icons that appear at the top of the table view. By default these include options like ""Sort ascending/descending"" and ""Facet by this"", but plugins can return additional actions to be included in this menu.
The method will be called with a columnDetails object with the following keys:
columnName - string
The name of the column
columnNotNull - boolean
True if the column is defined as NOT NULL
columnType - string
The SQLite data type of the column
isPk - boolean
True if the column is part of the primary key
It should return a JavaScript array of objects each with a label and onClick property:
label - string
The human-readable label for the action
onClick(evt) - function
A function that will be called when the action is clicked
The evt object passed to the onClick is the standard browser event object that triggered the click.
This example plugin adds two menu items - one to copy the column name to the clipboard and another that displays the column metadata in an alert() window:
document.addEventListener('datasette_init', function(ev) {
  ev.detail.registerPlugin('column-name-plugin', {
    version: 0.1,
    makeColumnActions: (columnDetails) => {
      return [
        {
          label: 'Copy column to clipboard',
          onClick: async (evt) => {
            await navigator.clipboard.writeText(columnDetails.columnName)
          }
        },
        {
          label: 'Alert column metadata',
          onClick: () => alert(JSON.stringify(columnDetails, null, 2))
        }
      ];
    }
  });
});","[""JavaScript plugins"", ""JavaScript plugin objects""]",[]
json_api:column-filter-arguments,json_api,column-filter-arguments,Column filter arguments,"You can filter the data returned by the table based on column values using a query string argument.
?column__exact=value or ?column=value
Returns rows where the specified column exactly matches the value.
?column__not=value
Returns rows where the column does not match the value.
?column__contains=value
Rows where the string column contains the specified value ( column like ""%value%"" in SQL).
?column__notcontains=value
Rows where the string column does not contain the specified value ( column not like ""%value%"" in SQL).
?column__endswith=value
Rows where the string column ends with the specified value ( column like ""%value"" in SQL).
?column__startswith=value
Rows where the string column starts with the specified value ( column like ""value%"" in SQL).
?column__gt=value
Rows which are greater than the specified value.
?column__gte=value
Rows which are greater than or equal to the specified value.
?column__lt=value
Rows which are less than the specified value.
?column__lte=value
Rows which are less than or equal to the specified value.
?column__like=value
Match rows with a LIKE clause, case insensitive and with % as the wildcard character.
?column__notlike=value
Match rows that do not match the provided LIKE clause.
?column__glob=value
Similar to LIKE but uses Unix wildcard syntax and is case sensitive.
?column__in=value1,value2,value3
Rows where column matches any of the provided values.
You can use a comma separated string, or you can use a JSON array.
The JSON array option is useful if one of your matching values itself contains a comma:
?column__in=[""value"",""value,with,commas""]
?column__notin=value1,value2,value3
Rows where column does not match any of the provided values. The inverse of __in= . Also supports JSON arrays.
?column__arraycontains=value
Works against columns that contain JSON arrays - matches if any of the values in that array match the provided value.
This is only available if the json1 SQLite extension is enabled.
?column__arraynotcontains=value
Works against columns that contain JSON arrays - matches if none of the values in that array match the provided value.
This is only available if the json1 SQLite extension is enabled.
?column__date=value
Column is a datestamp occurring on the specified YYYY-MM-DD date, e.g. 2018-01-02 .
?column__isnull=1
Matches rows where the column is null.
?column__notnull=1
Matches rows where the column is not null.
?column__isblank=1
Matches rows where the column is blank, meaning null or the empty string.
?column__notblank=1
Matches rows where the column is not blank.","[""JSON API"", ""Table arguments""]",[]
json_api:expand-foreign-keys,json_api,expand-foreign-keys,Expanding foreign key references,"Datasette can detect foreign key relationships and resolve those references into
labels. The HTML interface does this by default for every detected foreign key
column - you can turn that off using ?_labels=off .
You can request foreign keys be expanded in JSON using the _labels=on or
_label=COLUMN special query string parameters. Here's what an expanded row
looks like:
[
    {
        ""rowid"": 1,
        ""TreeID"": 141565,
        ""qLegalStatus"": {
            ""value"": 1,
            ""label"": ""Permitted Site""
        },
        ""qSpecies"": {
            ""value"": 1,
            ""label"": ""Myoporum laetum :: Myoporum""
        },
        ""qAddress"": ""501X Baker St"",
        ""SiteOrder"": 1
    }
]
The column in the foreign key table that is used for the label can be specified
in metadata.json - see Specifying the label column for a table .","[""JSON API""]",[]
json_api:id1,json_api,id1,JSON API,"Datasette provides a JSON API for your SQLite databases. Anything you can do
through the Datasette user interface can also be accessed as JSON via the API.
To access the API for a page, either click on the .json link on that page or
edit the URL and add a .json extension to it.",[],[]
json_api:id2,json_api,id2,Table arguments,The Datasette table view takes a number of special query string arguments.,"[""JSON API""]",[]
json_api:json-api-cors,json_api,json-api-cors,Enabling CORS,"If you start Datasette with the --cors option, each JSON endpoint will be
served with the following additional HTTP headers:
[[[cog
from datasette.utils import add_cors_headers
import textwrap
headers = {}
add_cors_headers(headers)
output = ""\n"".join(""{}: {}"".format(k, v) for k, v in headers.items())
cog.out(""\n::\n\n"")
cog.out(textwrap.indent(output, ' '))
cog.out(""\n\n"")
]]]
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Authorization, Content-Type
Access-Control-Expose-Headers: Link
Access-Control-Allow-Methods: GET, POST, HEAD, OPTIONS
Access-Control-Max-Age: 3600
[[[end]]]
This allows JavaScript running on any domain to make cross-origin
requests to interact with the Datasette API.
If you start Datasette without the --cors option only JavaScript running on
the same domain as Datasette will be able to access the API.
Here's how to serve data.db with CORS enabled:
datasette data.db --cors","[""JSON API""]",[]
json_api:json-api-default,json_api,json-api-default,Default representation,"The default JSON representation of data from a SQLite table or custom query
looks like this:
{
    ""ok"": true,
    ""rows"": [
        {
            ""id"": 3,
            ""name"": ""Detroit""
        },
        {
            ""id"": 2,
            ""name"": ""Los Angeles""
        },
        {
            ""id"": 4,
            ""name"": ""Memnonia""
        },
        {
            ""id"": 1,
            ""name"": ""San Francisco""
        }
    ],
    ""truncated"": false
}
""ok"" is always true if an error did not occur.
The ""rows"" key is a list of objects, each one representing a row.
The ""truncated"" key lets you know if the query was truncated. This can happen if a SQL query returns more than 1,000 results (or the max_returned_rows setting).
For table pages, an additional key ""next"" may be present. This indicates that the next page in the pagination set can be retrieved using ?_next=VALUE .","[""JSON API""]",[]
json_api:json-api-discover-alternate,json_api,json-api-discover-alternate,Discovering the JSON for a page,"Most of the HTML pages served by Datasette provide a mechanism for discovering their JSON equivalents using the HTML link mechanism.
You can find this near the top of the source code of those pages, looking like this:
<link rel=""alternate""
  type=""application/json+datasette""
  href=""https://latest.datasette.io/fixtures/sortable.json"">
The JSON URL is also made available in a Link HTTP header for the page:
Link: <https://latest.datasette.io/fixtures/sortable.json>; rel=""alternate""; type=""application/json+datasette""","[""JSON API""]",[]
json_api:json-api-shapes,json_api,json-api-shapes,Different shapes,"The _shape parameter can be used to access alternative formats for the
rows key which may be more convenient for your application. There are three
options:
?_shape=objects - ""rows"" is a list of JSON key/value objects - the default
?_shape=arrays - ""rows"" is a list of lists, where the order of values in each list matches the order of the columns
?_shape=array - a JSON array of objects - effectively just the ""rows"" key from the default representation
?_shape=array&_nl=on - a newline-separated list of JSON objects
?_shape=arrayfirst - a flat JSON array containing just the first value from each row
?_shape=object - a JSON object keyed using the primary keys of the rows
_shape=arrays looks like this:
{
    ""ok"": true,
    ""next"": null,
    ""rows"": [
        [3, ""Detroit""],
        [2, ""Los Angeles""],
        [4, ""Memnonia""],
        [1, ""San Francisco""]
    ]
}
_shape=array looks like this:
[
    {
        ""id"": 3,
        ""name"": ""Detroit""
    },
    {
        ""id"": 2,
        ""name"": ""Los Angeles""
    },
    {
        ""id"": 4,
        ""name"": ""Memnonia""
    },
    {
        ""id"": 1,
        ""name"": ""San Francisco""
    }
]
_shape=array&_nl=on looks like this:
{""id"": 1, ""value"": ""Myoporum laetum :: Myoporum""}
{""id"": 2, ""value"": ""Metrosideros excelsa :: New Zealand Xmas Tree""}
{""id"": 3, ""value"": ""Pinus radiata :: Monterey Pine""}
_shape=arrayfirst looks like this:
[1, 2, 3]
_shape=object looks like this:
{
    ""1"": {
        ""id"": 1,
        ""value"": ""Myoporum laetum :: Myoporum""
    },
    ""2"": {
        ""id"": 2,
        ""value"": ""Metrosideros excelsa :: New Zealand Xmas Tree""
    },
    ""3"": {
        ""id"": 3,
        ""value"": ""Pinus radiata :: Monterey Pine""
    }
}
The object shape is only available for queries against tables - custom SQL
queries and views do not have an obvious primary key so cannot be returned using
this format.
The object keys are always strings. If your table has a compound primary
key, the object keys will be a comma-separated string.","[""JSON API""]",[]
json_api:json-api-write,json_api,json-api-write,The JSON write API,"Datasette provides a write API for JSON data. This is a POST-only API that requires an authenticated API token, see API Tokens . The token will need to have the specified Permissions .","[""JSON API""]",[]
json_api:rowdeleteview,json_api,rowdeleteview,Deleting a row,"To delete a row, make a POST to /<database>/<table>/<row-pks>/-/delete. This requires the delete-row permission.
POST /<database>/<table>/<row-pks>/-/delete
Content-Type: application/json
Authorization: Bearer dstok_<rest-of-token>
<row-pks> here is the tilde-encoded primary key value of the row to delete - or a comma-separated list of primary key values if the table has a composite primary key.
If successful, this will return a 200 status code and a {""ok"": true} response body.
Any errors will return {""errors"": [""... descriptive message ...""], ""ok"": false} , and a 400 status code for a bad input or a 403 status code for an authentication or permission error.","[""JSON API"", ""The JSON write API""]",[]
json_api:rowupdateview,json_api,rowupdateview,Updating a row,"To update a row, make a POST to /<database>/<table>/<row-pks>/-/update. This requires the update-row permission.
POST /<database>/<table>/<row-pks>/-/update
Content-Type: application/json
Authorization: Bearer dstok_<rest-of-token>
{
    ""update"": {
        ""text_column"": ""New text string"",
        ""integer_column"": 3,
        ""float_column"": 3.14
    }
}
<row-pks> here is the tilde-encoded primary key value of the row to update - or a comma-separated list of primary key values if the table has a composite primary key.
You only need to pass the columns you want to update. Any other columns will be left unchanged.
If successful, this will return a 200 status code and a {""ok"": true} response body.
Add ""return"": true to the request body to return the updated row:
{
    ""update"": {
        ""title"": ""New title""
    },
    ""return"": true
}
The returned JSON will look like this:
{
    ""ok"": true,
    ""row"": {
        ""id"": 1,
        ""title"": ""New title"",
        ""other_column"": ""Will be present here too""
    }
}
Any errors will return {""errors"": [""... descriptive message ...""], ""ok"": false} , and a 400 status code for a bad input or a 403 status code for an authentication or permission error.
Pass ""alter: true to automatically add any missing columns to the table. This requires the alter-table permission.","[""JSON API"", ""The JSON write API""]",[]
json_api:tablecreateview,json_api,tablecreateview,Creating a table,"To create a table, make a POST to /<database>/-/create. This requires the create-table permission.
POST /<database>/-/create
Content-Type: application/json
Authorization: Bearer dstok_<rest-of-token>
{
    ""table"": ""name_of_new_table"",
    ""columns"": [
        {
            ""name"": ""id"",
            ""type"": ""integer""
        },
        {
            ""name"": ""title"",
            ""type"": ""text""
        }
    ],
    ""pk"": ""id""
}
The JSON here describes the table that will be created:
table is the name of the table to create. This field is required.
columns is a list of columns to create. Each column is a dictionary with name and type keys.
name is the name of the column. This is required.
type is the type of the column. This is optional - if not provided, text will be assumed. The valid types are text , integer , float and blob .
pk is the primary key for the table. This is optional - if not provided, Datasette will create a SQLite table with a hidden rowid column.
If the primary key is an integer column, it will be configured to automatically increment for each new record.
If you set this to id without including an id column in the list of columns , Datasette will create an auto-incrementing integer ID column for you.
pks can be used instead of pk to create a compound primary key. It should be a JSON list of column names to use in that primary key.
ignore can be set to true to ignore existing rows by primary key if the table already exists.
replace can be set to true to replace existing rows by primary key if the table already exists. This requires the update-row permission.
alter can be set to true if you want to automatically add any missing columns to the table. This requires the alter-table permission.
If the table is successfully created this will return a 201 status code and the following response:
{
    ""ok"": true,
    ""database"": ""data"",
    ""table"": ""name_of_new_table"",
    ""table_url"": ""http://127.0.0.1:8001/data/name_of_new_table"",
    ""table_api_url"": ""http://127.0.0.1:8001/data/name_of_new_table.json"",
    ""schema"": ""CREATE TABLE [name_of_new_table] (\n [id] INTEGER PRIMARY KEY,\n [title] TEXT\n)""
}","[""JSON API"", ""The JSON write API""]",[]
json_api:tablecreateview-example,json_api,tablecreateview-example,Creating a table from example data,"Instead of specifying columns directly you can instead pass a single example row or a list of rows .
Datasette will create a table with a schema that matches those rows and insert them for you:
POST /<database>/-/create
Content-Type: application/json
Authorization: Bearer dstok_<rest-of-token>
{
    ""table"": ""creatures"",
    ""rows"": [
        {
            ""id"": 1,
            ""name"": ""Tarantula""
        },
        {
            ""id"": 2,
            ""name"": ""Kākāpō""
        }
    ],
    ""pk"": ""id""
}
Doing this requires both the create-table and insert-row permissions.
The 201 response here will be similar to the columns form, but will also include the number of rows that were inserted as row_count :
{
    ""ok"": true,
    ""database"": ""data"",
    ""table"": ""creatures"",
    ""table_url"": ""http://127.0.0.1:8001/data/creatures"",
    ""table_api_url"": ""http://127.0.0.1:8001/data/creatures.json"",
    ""schema"": ""CREATE TABLE [creatures] (\n [id] INTEGER PRIMARY KEY,\n [name] TEXT\n)"",
    ""row_count"": 2
}
You can call the create endpoint multiple times for the same table provided you are specifying the table using the rows or row option. New rows will be inserted into the table each time. This means you can use this API if you are unsure if the relevant table has been created yet.
If you pass a row to the create endpoint with a primary key that already exists you will get an error that looks like this:
{
    ""ok"": false,
    ""errors"": [
        ""UNIQUE constraint failed: creatures.id""
    ]
}
You can avoid this error by passing the same ""ignore"": true or ""replace"": true options to the create endpoint as you can to the insert endpoint .
To use the ""replace"": true option you will also need the update-row permission.
Pass ""alter"": true to automatically add any missing columns to the existing table that are present in the rows you are submitting. This requires the alter-table permission.","[""JSON API"", ""The JSON write API""]",[]
json_api:tabledropview,json_api,tabledropview,Dropping tables,"To drop a table, make a POST to /<database>/<table>/-/drop. This requires the drop-table permission.
POST /<database>/<table>/-/drop
Content-Type: application/json
Authorization: Bearer dstok_<rest-of-token>
Without a POST body this will return a status 200 with a note about how many rows will be deleted:
{
    ""ok"": true,
    ""database"": ""<database>"",
    ""table"": ""<table>"",
    ""row_count"": 5,
    ""message"": ""Pass \""confirm\"": true to confirm""
}
If you pass the following POST body:
{
    ""confirm"": true
}
Then the table will be dropped and a status 200 response of {""ok"": true} will be returned.
Any errors will return {""errors"": [""... descriptive message ...""], ""ok"": false} , and a 400 status code for a bad input or a 403 status code for an authentication or permission error.","[""JSON API"", ""The JSON write API""]",[]
json_api:tableinsertview,json_api,tableinsertview,Inserting rows,"This requires the insert-row permission.
A single row can be inserted using the ""row"" key:
POST /<database>/<table>/-/insert
Content-Type: application/json
Authorization: Bearer dstok_<rest-of-token>
{
    ""row"": {
        ""column1"": ""value1"",
        ""column2"": ""value2""
    }
}
If successful, this will return a 201 status code and the newly inserted row, for example:
{
    ""rows"": [
        {
            ""id"": 1,
            ""column1"": ""value1"",
            ""column2"": ""value2""
        }
    ]
}
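A minimal client sketch for this endpoint using only the Python standard library - the URL, table name and token here are illustrative:
import json
import urllib.request

API_TOKEN = ""dstok_..."" # replace with your API token

request = urllib.request.Request(
    ""http://127.0.0.1:8001/data/my_table/-/insert"",
    data=json.dumps({""row"": {""column1"": ""value1""}}).encode(""utf-8""),
    headers={
        ""Content-Type"": ""application/json"",
        ""Authorization"": ""Bearer "" + API_TOKEN,
    },
    method=""POST"",
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))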
To insert multiple rows at a time, use the same API method but send a list of dictionaries as the ""rows"" key:
POST /<database>/<table>/-/insert
Content-Type: application/json
Authorization: Bearer dstok_<rest-of-token>
{
    ""rows"": [
        {
            ""column1"": ""value1"",
            ""column2"": ""value2""
        },
        {
            ""column1"": ""value3"",
            ""column2"": ""value4""
        }
    ]
}
If successful, this will return a 201 status code and a {""ok"": true} response body.
The maximum number of rows that can be submitted at once defaults to 100, but this can be changed using the max_insert_rows setting.
To return the newly inserted rows, add the ""return"": true key to the request body:
{
    ""rows"": [
        {
            ""column1"": ""value1"",
            ""column2"": ""value2""
        },
        {
            ""column1"": ""value3"",
            ""column2"": ""value4""
        }
    ],
    ""return"": true
}
This will return the same ""rows"" key as the single row example above. There is a small performance penalty for using this option.
If any of your rows have a primary key that is already in use, you will get an error and none of the rows will be inserted:
{
    ""ok"": false,
    ""errors"": [
        ""UNIQUE constraint failed: new_table.id""
    ]
}
Pass ""ignore"": true to ignore these errors and insert the other rows:
{
    ""rows"": [
        {
            ""id"": 1,
            ""column1"": ""value1"",
            ""column2"": ""value2""
        },
        {
            ""id"": 2,
            ""column1"": ""value3"",
            ""column2"": ""value4""
        }
    ],
    ""ignore"": true
}
Or you can pass ""replace"": true to replace any rows with conflicting primary keys with the new values. This requires the update-row permission.
Pass ""alter: true to automatically add any missing columns to the table. This requires the alter-table permission.","[""JSON API"", ""The JSON write API""]",[]
json_api:tableupsertview,json_api,tableupsertview,Upserting rows,"An upsert is an insert or update operation. If a row with a matching primary key already exists it will be updated - otherwise a new row will be inserted.
The upsert API is mostly the same shape as the insert API . It requires both the insert-row and update-row permissions.
POST /<database>/<table>/-/upsert
Content-Type: application/json
Authorization: Bearer dstok_<rest-of-token>
{
    ""rows"": [
        {
            ""id"": 1,
            ""title"": ""Updated title for 1"",
            ""description"": ""Updated description for 1""
        },
        {
            ""id"": 2,
            ""description"": ""Updated description for 2""
        },
        {
            ""id"": 3,
            ""title"": ""Item 3"",
            ""description"": ""Description for 3""
        }
    ]
}
Imagine a table with a primary key of id and which already has rows with id values of 1 and 2 .
The above example will:
Update the row with id of 1 to set both title and description to the new values
Update the row with id of 2 to set title to the new value - description will be left unchanged
Insert a new row with id of 3 and both title and description set to the new values
Similar to /-/insert , a row key with an object can be used instead of a rows array to upsert a single row.
If successful, this will return a 200 status code and a {""ok"": true} response body.
Add ""return"": true to the request body to return full copies of the affected rows after they have been inserted or updated:
{
    ""rows"": [
        {
            ""id"": 1,
            ""title"": ""Updated title for 1"",
            ""description"": ""Updated description for 1""
        },
        {
            ""id"": 2,
            ""description"": ""Updated description for 2""
        },
        {
            ""id"": 3,
            ""title"": ""Item 3"",
            ""description"": ""Description for 3""
        }
    ],
    ""return"": true
}
This will return the following:
{
    ""ok"": true,
    ""rows"": [
        {
            ""id"": 1,
            ""title"": ""Updated title for 1"",
            ""description"": ""Updated description for 1""
        },
        {
            ""id"": 2,
            ""title"": ""Item 2"",
            ""description"": ""Updated description for 2""
        },
        {
            ""id"": 3,
            ""title"": ""Item 3"",
            ""description"": ""Description for 3""
        }
    ]
}
When using upsert you must provide the primary key column (or columns if the table has a compound primary key) for every row, or you will get a 400 error:
{
    ""ok"": false,
    ""errors"": [
        ""Row 0 is missing primary key column(s): \""id\""""
    ]
}
If your table does not have an explicit primary key you should pass the SQLite rowid key instead.
Pass ""alter: true to automatically add any missing columns to the table. This requires the alter-table permission.","[""JSON API"", ""The JSON write API""]",[]
metadata:database-level-metadata,metadata,database-level-metadata,Database-level metadata,"""Database-level"" metadata refers to fields that can be specified for each database in a Datasette instance. These attributes should be listed under a database inside the ""databases"" field.
The following is the full list of allowed database-level metadata fields:
source
source_url
license
license_url
about
about_url","[""Metadata"", ""Metadata reference""]",[]
metadata:id1,metadata,id1,Metadata,"Data loves metadata. Any time you run Datasette you can optionally include a
YAML or JSON file with metadata about your databases and tables. Datasette will then
display that information in the web UI.
Run Datasette like this:
datasette database1.db database2.db --metadata metadata.yaml
Your metadata.yaml file can look something like this:
[[[cog
from metadata_doc import metadata_example
metadata_example(cog, {
""title"": ""Custom title for your index page"",
""description"": ""Some description text can go here"",
""license"": ""ODbL"",
""license_url"": ""https://opendatacommons.org/licenses/odbl/"",
""source"": ""Original Data Source"",
""source_url"": ""http://example.com/""
})
]]]
[[[end]]]
Choosing YAML over JSON adds support for multi-line strings and comments.
The above metadata will be displayed on the index page of your Datasette-powered
site. The source and license information will also be included in the footer of
every page served by Datasette.
Any special HTML characters in description will be escaped. If you want to
include HTML in your description, you can use a description_html property
instead.",[],[]
metadata:id2,metadata,id2,Metadata reference,A full reference of every supported option in a metadata.json or metadata.yaml file.,"[""Metadata""]",[]
metadata:label-columns,metadata,label-columns,Specifying the label column for a table,"Datasette's HTML interface attempts to display foreign key references as
labelled hyperlinks. By default, it looks for referenced tables that only have
two columns: a primary key column and one other. It assumes that the second
column should be used as the link label.
If your table has more than two columns you can specify which column should be
used for the link label with the label_column property:
[[[cog
metadata_example(cog, {
""databases"": {
""database1"": {
""tables"": {
""example_table"": {
""label_column"": ""title""
}
}
}
}
})
]]]
[[[end]]]","[""Metadata""]",[]
metadata:metadata-default-sort,metadata,metadata-default-sort,Setting a default sort order,"By default Datasette tables are sorted by primary key. You can over-ride this default for a specific table using the ""sort"" or ""sort_desc"" metadata properties:
[[[cog
metadata_example(cog, {
""databases"": {
""mydatabase"": {
""tables"": {
""example_table"": {
""sort"": ""created""
}
}
}
}
})
]]]
[[[end]]]
Or use ""sort_desc"" to sort in descending order:
[[[cog
metadata_example(cog, {
""databases"": {
""mydatabase"": {
""tables"": {
""example_table"": {
""sort_desc"": ""created""
}
}
}
}
})
]]]
[[[end]]]","[""Metadata""]",[]
metadata:metadata-hiding-tables,metadata,metadata-hiding-tables,Hiding tables,"You can hide tables from the database listing view (in the same way that FTS and
SpatiaLite tables are automatically hidden) using ""hidden"": true :
[[[cog
metadata_example(cog, {
""databases"": {
""database1"": {
""tables"": {
""example_table"": {
""hidden"": True
}
}
}
}
})
]]]
[[[end]]]","[""Metadata""]",[]
metadata:metadata-page-size,metadata,metadata-page-size,Setting a custom page size,"Datasette defaults to displaying 100 rows per page, for both tables and views. You can change this default page size on a per-table or per-view basis using the ""size"" key in metadata.json :
[[[cog
metadata_example(cog, {
""databases"": {
""mydatabase"": {
""tables"": {
""example_table"": {
""size"": 10
}
}
}
}
})
]]]
[[[end]]]
This size can still be over-ridden by passing e.g. ?_size=50 in the query string.","[""Metadata""]",[]
metadata:metadata-sortable-columns,metadata,metadata-sortable-columns,Setting which columns can be used for sorting,"Datasette allows any column to be used for sorting by default. If you need to
control which columns are available for sorting you can do so using the optional
sortable_columns key:
[[[cog
metadata_example(cog, {
""databases"": {
""database1"": {
""tables"": {
""example_table"": {
""sortable_columns"": [
""height"",
""weight""
]
}
}
}
}
})
]]]
[[[end]]]
This will restrict sorting of example_table to just the height and
weight columns.
You can also disable sorting entirely by setting ""sortable_columns"": []
You can use sortable_columns to enable specific sort orders for a view called name_of_view in the database my_database like so:
[[[cog
metadata_example(cog, {
""databases"": {
""my_database"": {
""tables"": {
""name_of_view"": {
""sortable_columns"": [
""clicks"",
""impressions""
]
}
}
}
}
})
]]]
[[[end]]]","[""Metadata""]",[]
metadata:metadata-source-license-about,metadata,metadata-source-license-about,"Source, license and about","The three visible metadata fields you can apply to everything, specific databases or specific tables are source, license and about. All three are optional.
source and source_url should be used to indicate where the underlying data came from.
license and license_url should be used to indicate the license under which the data can be used.
about and about_url can be used to link to further information about the project - an accompanying blog entry for example.
For each of these you can provide just the *_url field and Datasette will treat that as the default link label text and display the URL directly on the page.","[""Metadata""]",[]
metadata:per-database-and-per-table-metadata,metadata,per-database-and-per-table-metadata,Per-database and per-table metadata,"Metadata at the top level of the file will be shown on the index page and in the
footer on every page of the site. The license and source is expected to apply to
all of your data.
You can also provide metadata at the per-database or per-table level, like this:
[[[cog
metadata_example(cog, {
""databases"": {
""database1"": {
""source"": ""Alternative source"",
""source_url"": ""http://example.com/"",
""tables"": {
""example_table"": {
""description_html"": ""Custom table description"",
""license"": ""CC BY 3.0 US"",
""license_url"": ""https://creativecommons.org/licenses/by/3.0/us/""
}
}
}
}
})
]]]
[[[end]]]
Each of the top-level metadata fields can be used at the database and table level.","[""Metadata""]",[]
metadata:table-level-metadata,metadata,table-level-metadata,Table-level metadata,"""Table-level"" metadata refers to fields that can be specified for each table in a Datasette instance. These attributes should be listed under a specific table using the ""tables"" field.
The following is the full list of allowed table-level metadata fields:
source
source_url
license
license_url
about
about_url
hidden
sort/sort_desc
size
sortable_columns
label_column
facets
fts_table
fts_pk
searchmode
columns","[""Metadata"", ""Metadata reference""]",[]
metadata:top-level-metadata,metadata,top-level-metadata,Top-level metadata,"""Top-level"" metadata refers to fields that can be specified at the root level of a metadata file. These attributes are meant to describe the entire Datasette instance.
The following is the full list of allowed top-level metadata fields:
title
description
description_html
license
license_url
source
source_url","[""Metadata"", ""Metadata reference""]",[]
pages:databaseview-hidden,pages,databaseview-hidden,Hidden tables,"Some tables listed on the database page are treated as hidden. Hidden tables are not completely invisible - they can be accessed through the ""hidden tables"" link at the bottom of the page. They are hidden because they represent low-level implementation details which are generally not useful to end-users of Datasette.
The following tables are hidden by default:
Any table with a name that starts with an underscore - this is a Datasette convention to help plugins easily hide their own internal tables.
Tables that have been configured as ""hidden"": true using Hiding tables .
*_fts tables that implement SQLite full-text search indexes.
Tables relating to the inner workings of the SpatiaLite SQLite extension.
sqlite_stat tables used to store statistics used by the query optimizer.","[""Pages and API endpoints"", ""Database""]",[]
pages:pages,pages,pages,Pages and API endpoints,"The Datasette web application offers a number of different pages that can be accessed to explore the data in question, each of which is accompanied by an equivalent JSON API.",[],[]
performance:performance,performance,performance,Performance and caching,"Datasette runs on top of SQLite, and SQLite has excellent performance. For small databases almost any query should return in just a few milliseconds, and larger databases (100s of MBs or even GBs of data) should perform extremely well provided your queries make sensible use of database indexes.
That said, there are a number of tricks you can use to improve Datasette's performance.",[],[]
performance:performance-immutable-mode,performance,performance-immutable-mode,Immutable mode,"If you can be certain that a SQLite database file will not be changed by another process you can tell Datasette to open that file in immutable mode .
Doing so will disable all locking and change detection, which can result in improved query performance.
This also enables further optimizations relating to HTTP caching, described below.
To open a file in immutable mode pass it to the datasette command using the -i option:
datasette -i data.db
When you open a file in immutable mode like this Datasette will also calculate and cache the row counts for each table in that database when it first starts up, further improving performance.","[""Performance and caching""]",[]
performance:performance-inspect,performance,performance-inspect,"Using ""datasette inspect""","Counting the rows in a table can be a very expensive operation on larger databases. In immutable mode Datasette performs this count only once and caches the results, but this can still cause server startup time to increase by several seconds or more.
If you know that a database is never going to change you can precalculate the table row counts once and store them in a JSON file, then use that file when you later start the server.
To create a JSON file containing the calculated row counts for a database, use the following:
datasette inspect data.db --inspect-file=counts.json
Then later you can start Datasette against the counts.json file and use it to skip the row counting step and speed up server startup:
datasette -i data.db --inspect-file=counts.json
You need to use the -i immutable mode against the database file here or the counts from the JSON file will be ignored.
You will rarely need this optimization in everyday use, but several of the datasette publish commands described in Publishing data use it for better performance when deploying a database file to a hosting provider.","[""Performance and caching""]",[]
plugin_hooks:plugin-actions,plugin_hooks,plugin-actions,Action hooks,"Action hooks can be used to add items to the action menus that appear at the top of different pages within Datasette. Unlike menu_links() , which adds links that are displayed on every page, these actions should only be relevant to the page the user is currently viewing.
Each of these hooks should return a list of {""href"": ""..."", ""label"": ""...""} menu items, with optional ""description"": ""..."" keys describing each action in more detail.
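For example, a minimal table_actions(datasette, actor, database, table, request) implementation might look like this - the href path, label and description are invented for the illustration:
from datasette import hookimpl
@hookimpl
def table_actions(datasette, actor, database, table, request):
    # A hypothetical action linking to an imagined tool page:
    return [
        {
            ""href"": datasette.urls.path(""/-/my-table-tool""),
            ""label"": ""Open table tool"",
            ""description"": ""An illustrative action for this table"",
        }
    ]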
They can alternatively return an async def awaitable function which, when called, returns a list of those menu items.","[""Plugin hooks""]",[]
plugin_hooks:plugin-event-tracking,plugin_hooks,plugin-event-tracking,Event tracking,"Datasette includes an internal mechanism for tracking notable events. This can be used for analytics, but can also be used by plugins that want to listen out for when key events occur (such as a table being created) and take action in response.
Plugins can register to receive events using the track_event plugin hook.
They can also define their own events for other plugins to receive using the register_events() plugin hook , combined with calls to the datasette.track_event() internal method .","[""Plugin hooks""]",[]
plugin_hooks:plugin-hook-forbidden,plugin_hooks,plugin-hook-forbidden,"forbidden(datasette, request, message)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to render templates or execute SQL queries.
request - Request object
The current HTTP request.
message - string
A message hinting at why the request was forbidden.
Plugins can use this to customize how Datasette responds when a 403 Forbidden error occurs - usually because a page failed a permission check, see Permissions .
If a plugin hook wishes to react to the error, it should return a Response object .
This example returns a redirect to a /-/login page:
from datasette import hookimpl, Response
from urllib.parse import urlencode
@hookimpl
def forbidden(request, message):
    return Response.redirect(
        ""/-/login?"" + urlencode({""message"": message})
    )
The function can alternatively return an awaitable function if it needs to make any asynchronous method calls. This example renders a template:
from datasette import hookimpl, Response
@hookimpl
def forbidden(datasette, request):
    async def inner():
        return Response.html(
            await datasette.render_template(
                ""render_message.html"", request=request
            )
        )
    return inner","[""Plugin hooks""]",[]
plugin_hooks:plugin-hook-homepage-actions,plugin_hooks,plugin-hook-homepage-actions,"homepage_actions(datasette, actor, request)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
actor - dictionary or None
The currently authenticated actor .
request - Request object
The current HTTP request.
Populates an actions menu on the top-level index homepage of the Datasette instance.
This example adds a link to an imagined tool for editing the homepage, shown only to signed-in users:
from datasette import hookimpl
@hookimpl
def homepage_actions(datasette, actor):
    if actor:
        return [
            {
                ""href"": datasette.urls.path(
                    ""/-/customize-homepage""
                ),
                ""label"": ""Customize homepage"",
            }
        ]","[""Plugin hooks"", ""Action hooks""]",[]
plugin_hooks:plugin-hook-register-events,plugin_hooks,plugin-hook-register-events,register_events(datasette),"datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
This hook should return a list of Event subclasses that represent custom events that the plugin might send to the datasette.track_event() method.
This example registers event subclasses for ban-user and unban-user events:
from dataclasses import dataclass
from datasette import hookimpl, Event
@dataclass
class BanUserEvent(Event):
    name = ""ban-user""
    user: dict
@dataclass
class UnbanUserEvent(Event):
    name = ""unban-user""
    user: dict
@hookimpl
def register_events():
    return [BanUserEvent, UnbanUserEvent]
The plugin can then call datasette.track_event(...) to send a ban-user event:
await datasette.track_event(
    BanUserEvent(user={""id"": 1, ""username"": ""cleverbot""})
)","[""Plugin hooks"", ""Event tracking""]",[]
plugin_hooks:plugin-hook-register-magic-parameters,plugin_hooks,plugin-hook-register-magic-parameters,register_magic_parameters(datasette),"datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
Magic parameters can be used to add automatic parameters to canned queries . This plugin hook allows additional magic parameters to be defined by plugins.
Magic parameters all take this format: _prefix_rest_of_parameter . The prefix indicates which magic parameter function should be called - the rest of the parameter is passed as an argument to that function.
To register a new function, return it as a tuple of (string prefix, function) from this hook. The function you register should take two arguments: key and request , where key is the rest_of_parameter portion of the parameter and request is the current Request object .
This example registers two new magic parameters: :_request_http_version returning the HTTP version of the current request, and :_uuid_new which returns a new UUID:
from datasette import hookimpl
from uuid import uuid4
def uuid(key, request):
    if key == ""new"":
        return str(uuid4())
    else:
        raise KeyError
def request(key, request):
    if key == ""http_version"":
        return request.scope[""http_version""]
    else:
        raise KeyError
@hookimpl
def register_magic_parameters(datasette):
    return [
        (""request"", request),
        (""uuid"", uuid),
    ]","[""Plugin hooks""]",[]
plugin_hooks:plugin-hook-top-canned-query,plugin_hooks,plugin-hook-top-canned-query,"top_canned_query(datasette, request, database, query_name)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
request - Request object
The current HTTP request.
database - string
The name of the database.
query_name - string
The name of the canned query.
Returns HTML to be displayed at the top of the canned query page.","[""Plugin hooks"", ""Template slots""]",[]
plugin_hooks:plugin-hook-top-database,plugin_hooks,plugin-hook-top-database,"top_database(datasette, request, database)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
request - Request object
The current HTTP request.
database - string
The name of the database.
Returns HTML to be displayed at the top of the database page.","[""Plugin hooks"", ""Template slots""]",[]
plugin_hooks:plugin-hook-top-homepage,plugin_hooks,plugin-hook-top-homepage,"top_homepage(datasette, request)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
request - Request object
The current HTTP request.
Returns HTML to be displayed at the top of the Datasette homepage.","[""Plugin hooks"", ""Template slots""]",[]
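The template slot hooks on this page all follow the same pattern. As a minimal sketch, a top_homepage implementation could return an HTML string directly - the banner markup is purely illustrative:
from datasette import hookimpl
@hookimpl
def top_homepage(datasette, request):
    return ""<p>Welcome to this Datasette instance!</p>""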
plugin_hooks:plugin-hook-top-query,plugin_hooks,plugin-hook-top-query,"top_query(datasette, request, database, sql)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
request - Request object
The current HTTP request.
database - string
The name of the database.
sql - string
The SQL query.
Returns HTML to be displayed at the top of the query results page.","[""Plugin hooks"", ""Template slots""]",[]
plugin_hooks:plugin-hook-top-row,plugin_hooks,plugin-hook-top-row,"top_row(datasette, request, database, table, row)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
request - Request object
The current HTTP request.
database - string
The name of the database.
table - string
The name of the table.
row - sqlite3.Row
The SQLite row object being displayed.
Returns HTML to be displayed at the top of the row page.","[""Plugin hooks"", ""Template slots""]",[]
plugin_hooks:plugin-hook-top-table,plugin_hooks,plugin-hook-top-table,"top_table(datasette, request, database, table)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
request - Request object
The current HTTP request.
database - string
The name of the database.
table - string
The name of the table.
Returns HTML to be displayed at the top of the table page.","[""Plugin hooks"", ""Template slots""]",[]
plugin_hooks:plugin-hook-view-actions,plugin_hooks,plugin-hook-view-actions,"view_actions(datasette, actor, database, view, request)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
actor - dictionary or None
The currently authenticated actor .
database - string
The name of the database.
view - string
The name of the SQL view.
request - Request object or None
The current HTTP request. This can be None if the request object is not available.
Like table_actions(datasette, actor, database, table, request) but for SQL views.","[""Plugin hooks"", ""Action hooks""]",[]
plugin_hooks:plugin-page-extras,plugin_hooks,plugin-page-extras,Page extras,These plugin hooks can be used to affect the way HTML pages for different Datasette interfaces are rendered.,"[""Plugin hooks""]",[]
plugin_hooks:plugin-register-permissions,plugin_hooks,plugin-register-permissions,register_permissions(datasette),"If your plugin needs to register additional permissions unique to that plugin - upload-csvs for example - you can return a list of those permissions from this hook.
from datasette import hookimpl, Permission
@hookimpl
def register_permissions(datasette):
    return [
        Permission(
            name=""upload-csvs"",
            abbr=None,
            description=""Upload CSV files"",
            takes_database=True,
            takes_resource=False,
            default=False,
        )
    ]
The fields of the Permission class are as follows:
name - string
The name of the permission, e.g. upload-csvs . This should be unique across all plugins that the user might have installed, so choose carefully.
abbr - string or None
An abbreviation of the permission, e.g. uc . This is optional - you can set it to None if you do not want to pick an abbreviation. Since this needs to be unique across all installed plugins it's best not to specify an abbreviation at all. If an abbreviation is provided it will be used when creating restricted signed API tokens.
description - string or None
A human-readable description of what the permission lets you do. Should make sense as the second part of a sentence that starts ""A user with this permission can ..."".
takes_database - boolean
True if this permission can be granted on a per-database basis, False if it is only valid at the overall Datasette instance level.
takes_resource - boolean
True if this permission can be granted on a per-resource basis. A resource is a database table, SQL view or canned query .
default - boolean
The default value for this permission if it is not explicitly granted to a user. True means the permission is granted by default, False means it is not.
This should only be True if you want anonymous users to be able to take this action.","[""Plugin hooks""]",[]
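Elsewhere in the plugin you can then check the permission using the datasette.permission_allowed() internal method. A sketch - the actor variable and the database name are assumptions for the example:
from datasette.utils.asgi import Forbidden
allowed = await datasette.permission_allowed(
    actor, ""upload-csvs"", resource=""my-database""
)
if not allowed:
    raise Forbidden(""upload-csvs permission required"")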
plugins:deploying-plugins-using-datasette-publish,plugins,deploying-plugins-using-datasette-publish,Deploying plugins using datasette publish,"The datasette publish and datasette package commands both take an optional --install argument. You can use this one or more times to tell Datasette to pip install specific plugins as part of the process:
datasette publish cloudrun mydb.db --install=datasette-vega
You can use the name of a package on PyPI or any of the other valid arguments to pip install such as a URL to a .zip file:
datasette publish cloudrun mydb.db \
--install=https://url-to-my-package.zip","[""Plugins"", ""Installing plugins""]",[]
plugins:one-off-plugins-using-plugins-dir,plugins,one-off-plugins-using-plugins-dir,One-off plugins using --plugins-dir,"You can also define one-off per-project plugins by saving them as plugin_name.py functions in a plugins/ folder and then passing that folder to datasette using the --plugins-dir option:
datasette mydb.db --plugins-dir=plugins/","[""Plugins"", ""Installing plugins""]",[]
plugins:plugins-configuration,plugins,plugins-configuration,Plugin configuration,"Plugins can have their own configuration, embedded in a configuration file . Configuration options for plugins live within a ""plugins"" key in that file, which can be included at the root, database or table level.
Here is an example of some plugin configuration for a specific table:
[[[cog
from metadata_doc import config_example
config_example(cog, {
""databases"": {
""sf-trees"": {
""tables"": {
""Street_Tree_List"": {
""plugins"": {
""datasette-cluster-map"": {
""latitude_column"": ""lat"",
""longitude_column"": ""lng""
}
}
}
}
}
}
})
]]]
[[[end]]]
This tells the datasette-cluster-map plugin which latitude and longitude columns should be used for a table called Street_Tree_List inside a database file called sf-trees.db .","[""Plugins""]",[]
plugins:plugins-configuration-secret,plugins,plugins-configuration-secret,Secret configuration values,"Some plugins may need configuration that should stay secret - API keys for example. There are two ways in which you can store secret configuration values.
As environment variables . If your secret lives in an environment variable that is available to the Datasette process, you can indicate that the configuration value should be read from that environment variable like so:
[[[cog
config_example(cog, {
""plugins"": {
""datasette-auth-github"": {
""client_secret"": {
""$env"": ""GITHUB_CLIENT_SECRET""
}
}
}
})
]]]
[[[end]]]
As values in separate files . Your secrets can also live in files on disk. To specify a secret should be read from a file, provide the full file path like this:
[[[cog
config_example(cog, {
""plugins"": {
""datasette-auth-github"": {
""client_secret"": {
""$file"": ""/secrets/client-secret""
}
}
}
})
]]]
[[[end]]]
If you are publishing your data using the datasette publish family of commands, you can use the --plugin-secret option to set these secrets at publish time. For example, using Heroku you might run the following command:
datasette publish heroku my_database.db \
--name my-heroku-app-demo \
--install=datasette-auth-github \
--plugin-secret datasette-auth-github client_id your_client_id \
--plugin-secret datasette-auth-github client_secret your_client_secret
This will set the necessary environment variables and add the following to the deployed metadata.yaml :
[[[cog
config_example(cog, {
""plugins"": {
""datasette-auth-github"": {
""client_id"": {
""$env"": ""DATASETTE_AUTH_GITHUB_CLIENT_ID""
},
""client_secret"": {
""$env"": ""DATASETTE_AUTH_GITHUB_CLIENT_SECRET""
}
}
}
})
]]]
[[[end]]]","[""Plugins"", ""Plugin configuration""]",[]
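When the plugin later reads its configuration using datasette.plugin_config() these secret values are resolved automatically. A brief sketch, using the plugin name from the examples above:
plugin_config = datasette.plugin_config(""datasette-auth-github"")
# plugin_config[""client_secret""] holds the resolved secret value,
# not the {""$env"": ...} indirection from the configuration file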
plugins:plugins-datasette-load-plugins,plugins,plugins-datasette-load-plugins,Controlling which plugins are loaded,"Datasette defaults to loading every plugin that is installed in the same virtual environment as Datasette itself.
You can set the DATASETTE_LOAD_PLUGINS environment variable to a comma-separated list of plugin names to load a controlled subset of plugins instead.
For example, to load just the datasette-vega and datasette-cluster-map plugins, set DATASETTE_LOAD_PLUGINS to datasette-vega,datasette-cluster-map :
export DATASETTE_LOAD_PLUGINS='datasette-vega,datasette-cluster-map'
datasette mydb.db
Or:
DATASETTE_LOAD_PLUGINS='datasette-vega,datasette-cluster-map' \
datasette mydb.db
To disable the loading of all additional plugins, set DATASETTE_LOAD_PLUGINS to an empty string:
export DATASETTE_LOAD_PLUGINS=''
datasette mydb.db
A quick way to test this setting is to use it with the datasette plugins command:
DATASETTE_LOAD_PLUGINS='datasette-vega' datasette plugins
This should output the following:
[
    {
        ""name"": ""datasette-vega"",
        ""static"": true,
        ""templates"": false,
        ""version"": ""0.6.2"",
        ""hooks"": [
            ""extra_css_urls"",
            ""extra_js_urls""
        ]
    }
]","[""Plugins""]",[]
plugins:plugins-installing,plugins,plugins-installing,Installing plugins,"If a plugin has been packaged for distribution using setuptools you can use the plugin by installing it alongside Datasette in the same virtual environment or Docker container.
You can install plugins using the datasette install command:
datasette install datasette-vega
You can uninstall plugins with datasette uninstall :
datasette uninstall datasette-vega
You can upgrade plugins with datasette install --upgrade or datasette install -U :
datasette install -U datasette-vega
This command can also be used to upgrade Datasette itself to the latest released version:
datasette install -U datasette
You can install multiple plugins at once by listing them as lines in a requirements.txt file like this:
datasette-vega
datasette-cluster-map
Then pass that file to datasette install -r :
datasette install -r requirements.txt
The install and uninstall commands are thin wrappers around pip install and pip uninstall , which ensure that they run pip in the same virtual environment as Datasette itself.","[""Plugins""]",[]
publish:publishing,publish,publishing,Publishing data,Datasette includes tools for publishing and deploying your data to the internet. The datasette publish command will deploy a new Datasette instance containing your databases directly to a Heroku or Google Cloud hosting account. You can also use datasette package to create a Docker image that bundles your databases together with the datasette application that is used to serve them.,[],[]
settings:config-dir,settings,config-dir,Configuration directory mode,"Normally you configure Datasette using command-line options. For a Datasette instance with custom templates, custom plugins, a static directory and several databases this can get quite verbose:
datasette one.db two.db \
--metadata=metadata.json \
--template-dir=templates/ \
--plugins-dir=plugins \
--static css:css
As an alternative to this, you can run Datasette in configuration directory mode. Create a directory with the following structure:
# In a directory called my-app:
my-app/one.db
my-app/two.db
my-app/datasette.yaml
my-app/metadata.json
my-app/templates/index.html
my-app/plugins/my_plugin.py
my-app/static/my.css
Now start Datasette by providing the path to that directory:
datasette my-app/
Datasette will detect the files in that directory and automatically configure itself using them. It will serve all *.db files that it finds, will load metadata.json if it exists, and will load the templates , plugins and static folders if they are present.
The files that can be included in this directory are as follows. All are optional.
*.db (or *.sqlite3 or *.sqlite ) - SQLite database files that will be served by Datasette
datasette.yaml - Configuration for the Datasette instance
metadata.json - Metadata for those databases - metadata.yaml or metadata.yml can be used as well
inspect-data.json - the result of running datasette inspect *.db --inspect-file=inspect-data.json from the configuration directory - any database files listed here will be treated as immutable, so they should not be changed while Datasette is running
templates/ - a directory containing Custom templates
plugins/ - a directory containing plugins, see Writing one-off plugins
static/ - a directory containing static files - these will be served from /static/filename.txt , see Serving static files","[""Settings""]",[]
settings:id1,settings,id1,Settings,,[],[]
settings:id2,settings,id2,Settings,"The following options can be set using --setting name value , or by storing them in the settings.json file for use with Configuration directory mode .","[""Settings""]",[]
settings:setting-allow-csv-stream,settings,setting-allow-csv-stream,allow_csv_stream,"Enables the CSV export feature where an entire table
(potentially hundreds of thousands of rows) can be exported as a single CSV
file. This is turned on by default - you can turn it off like this:
datasette mydatabase.db --setting allow_csv_stream off","[""Settings"", ""Settings""]",[]
settings:setting-allow-download,settings,setting-allow-download,allow_download,"Should users be able to download the original SQLite database using a link on the database index page? This is turned on by default. However, databases can only be downloaded if they are served in immutable mode and not in-memory. If downloading is unavailable for either of these reasons, the download link is hidden even if allow_download is on. To disable database downloads, use the following:
datasette mydatabase.db --setting allow_download off","[""Settings"", ""Settings""]",[]
settings:setting-allow-facet,settings,setting-allow-facet,allow_facet,"Allow users to specify columns they would like to facet on using the ?_facet=COLNAME URL parameter to the table view.
This is enabled by default. If disabled, facets will still be displayed if they have been specifically enabled in metadata.json configuration for the table.
Here's how to disable this feature:
datasette mydatabase.db --setting allow_facet off","[""Settings"", ""Settings""]",[]
settings:setting-allow-signed-tokens,settings,setting-allow-signed-tokens,allow_signed_tokens,"Should users be able to create signed API tokens to access Datasette?
This is turned on by default. Use the following to turn it off:
datasette mydatabase.db --setting allow_signed_tokens off
Turning this setting off will disable the /-/create-token page, described here . It will also cause any incoming Authorization: Bearer dstok_... API tokens to be ignored.","[""Settings"", ""Settings""]",[]
settings:setting-base-url,settings,setting-base-url,base_url,"If you are running Datasette behind a proxy, it may be useful to change the root path used for the Datasette instance.
For example, if you are sending traffic from https://www.example.com/tools/datasette/ through to a proxied Datasette instance you may wish Datasette to use /tools/datasette/ as its root URL.
You can do that like so:
datasette mydatabase.db --setting base_url /tools/datasette/","[""Settings"", ""Settings""]",[]
settings:setting-default-allow-sql,settings,setting-default-allow-sql,default_allow_sql,"Should users be able to execute arbitrary SQL queries by default?
Setting this to off causes permission checks for execute-sql to fail by default.
datasette mydatabase.db --setting default_allow_sql off
Another way to achieve this is to add ""allow_sql"": false to your datasette.yaml file, as described in Controlling the ability to execute arbitrary SQL . This setting offers a more convenient way to do this.","[""Settings"", ""Settings""]",[]
settings:setting-default-cache-ttl,settings,setting-default-cache-ttl,default_cache_ttl,"Default HTTP caching max-age header in seconds, used for Cache-Control: max-age=X . Can be over-ridden on a per-request basis using the ?_ttl= query string parameter. Set this to 0 to disable HTTP caching entirely. Defaults to 5 seconds.
datasette mydatabase.db --setting default_cache_ttl 60","[""Settings"", ""Settings""]",[]
settings:setting-default-facet-size,settings,setting-default-facet-size,default_facet_size,"The default number of unique rows returned by Facets is 30. You can customize it like this:
datasette mydatabase.db --setting default_facet_size 50","[""Settings"", ""Settings""]",[]
settings:setting-default-page-size,settings,setting-default-page-size,default_page_size,"The default number of rows returned by the table page. You can over-ride this on a per-page basis using the ?_size=80 query string parameter, provided you do not specify a value higher than the max_returned_rows setting. You can set this default using --setting like so:
datasette mydatabase.db --setting default_page_size 50","[""Settings"", ""Settings""]",[]
settings:setting-facet-suggest-time-limit-ms,settings,setting-facet-suggest-time-limit-ms,facet_suggest_time_limit_ms,"When Datasette calculates suggested facets it needs to run a SQL query for every column in your table. The default for this time limit is 50ms to account for the fact that it needs to run once for every column. If the time limit is exceeded the column will not be suggested as a facet.
You can increase this time limit like so:
datasette mydatabase.db --setting facet_suggest_time_limit_ms 500","[""Settings"", ""Settings""]",[]
settings:setting-facet-time-limit-ms,settings,setting-facet-time-limit-ms,facet_time_limit_ms,"This is the time limit Datasette allows for calculating a facet, which defaults to 200ms:
datasette mydatabase.db --setting facet_time_limit_ms 1000","[""Settings"", ""Settings""]",[]
settings:setting-force-https-urls,settings,setting-force-https-urls,force_https_urls,"Forces self-referential URLs in the JSON output to always use the https://
protocol. This is useful for cases where the application itself is hosted using
HTTP but is served to the outside world via a proxy that enables HTTPS.
datasette mydatabase.db --setting force_https_urls 1","[""Settings"", ""Settings""]",[]
settings:setting-max-csv-mb,settings,setting-max-csv-mb,max_csv_mb,"The maximum size of CSV that can be exported, in megabytes. Defaults to 100MB.
You can disable the limit entirely by setting this to 0:
datasette mydatabase.db --setting max_csv_mb 0","[""Settings"", ""Settings""]",[]
settings:setting-max-insert-rows,settings,setting-max-insert-rows,max_insert_rows,"Maximum rows that can be inserted at a time using the bulk insert API, see Inserting rows . Defaults to 100.
You can increase or decrease this limit like so:
datasette mydatabase.db --setting max_insert_rows 1000","[""Settings"", ""Settings""]",[]
settings:setting-max-returned-rows,settings,setting-max-returned-rows,max_returned_rows,"Datasette returns a maximum of 1,000 rows of data at a time. If you execute a query that returns more than 1,000 rows, Datasette will return the first 1,000 and include a warning that the result set has been truncated. You can use OFFSET/LIMIT or other methods in your SQL to implement pagination if you need to return more than 1,000 rows.
You can increase or decrease this limit like so:
datasette mydatabase.db --setting max_returned_rows 2000","[""Settings"", ""Settings""]",[]
settings:setting-max-signed-tokens-ttl,settings,setting-max-signed-tokens-ttl,max_signed_tokens_ttl,"Maximum allowed expiry time for signed API tokens created by users.
Defaults to 0 which means no limit - tokens can be created that will never expire.
Set this to a value in seconds to limit the maximum expiry time. For example, to set that limit to 24 hours you would use:
datasette mydatabase.db --setting max_signed_tokens_ttl 86400
This setting is enforced when incoming tokens are processed.","[""Settings"", ""Settings""]",[]
settings:setting-publish-secrets,settings,setting-publish-secrets,Using secrets with datasette publish,"The datasette publish and datasette package commands both generate a secret for you automatically when Datasette is deployed.
This means that every time you deploy a new version of a Datasette project, a new secret will be generated. This will cause signed cookies to become invalid on every fresh deploy.
You can fix this by creating a secret that will be used for multiple deploys and passing it using the --secret option:
datasette publish cloudrun mydb.db --service=my-service --secret=cdb19e94283a20f9d42cca5","[""Settings""]",[]
settings:setting-secret,settings,setting-secret,Configuring the secret,"Datasette uses a secret string to sign secure values such as cookies.
If you do not provide a secret, Datasette will create one when it starts up. This secret will reset every time the Datasette server restarts though, so things like authentication cookies and API tokens will not stay valid between restarts.
You can pass a secret to Datasette in two ways: with the --secret command-line option or by setting a DATASETTE_SECRET environment variable.
datasette mydb.db --secret=SECRET_VALUE_HERE
Or:
export DATASETTE_SECRET=SECRET_VALUE_HERE
datasette mydb.db
One way to generate a secure random secret is to use Python like this:
python3 -c 'import secrets; print(secrets.token_hex(32))'
cdb19e94283a20f9d42cca50c5a4871c0aa07392db308755d60a1a5b9bb0fa52
Plugin authors make use of this signing mechanism in their plugins using .sign(value, namespace=""default"") and .unsign(value, namespace=""default"") .","[""Settings""]",[]
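As an illustration, a plugin could sign a value and later verify it like this - the value and namespace are invented for the example:
signed = datasette.sign({""user_id"": 1}, namespace=""my-plugin"")
decoded = datasette.unsign(signed, namespace=""my-plugin"")
# decoded == {""user_id"": 1}; unsign() raises itsdangerous.BadSignature
# if the signed value has been tampered with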
settings:setting-sql-time-limit-ms,settings,setting-sql-time-limit-ms,sql_time_limit_ms,"By default, queries have a time limit of one second. If a query takes longer than this to run Datasette will terminate the query and return an error.
If this time limit is too short for you, you can customize it using the sql_time_limit_ms limit - for example, to increase it to 3.5 seconds:
datasette mydatabase.db --setting sql_time_limit_ms 3500
You can optionally set a lower time limit for an individual query using the ?_timelimit=100 query string argument:
/my-database/my-table?qSpecies=44&_timelimit=100
This would set the time limit to 100ms for that specific query. This feature is useful if you are working with databases of unknown size and complexity - a query that might make perfect sense for a smaller table could take too long to execute on a table with millions of rows. By setting custom time limits you can execute queries ""optimistically"" - e.g. give me an exact count of rows matching this query but only if it takes less than 100ms to calculate.","[""Settings"", ""Settings""]",[]
settings:setting-suggest-facets,settings,setting-suggest-facets,suggest_facets,"Should Datasette calculate suggested facets? On by default, turn this off like so:
datasette mydatabase.db --setting suggest_facets off","[""Settings"", ""Settings""]",[]
settings:setting-truncate-cells-html,settings,setting-truncate-cells-html,truncate_cells_html,"In the HTML table view, truncate any strings that are longer than this value.
The full value will still be available in CSV, JSON and on the individual row
HTML page. Set this to 0 to disable truncation.
datasette mydatabase.db --setting truncate_cells_html 0","[""Settings"", ""Settings""]",[]
settings:using-setting,settings,using-setting,Using --setting,"Datasette supports a number of settings. These can be set using the --setting name value option to datasette serve .
You can set multiple settings at once like this:
datasette mydatabase.db \
--setting default_page_size 50 \
--setting sql_time_limit_ms 3500 \
--setting max_returned_rows 2000
Settings can also be specified in the datasette.yaml configuration file .","[""Settings""]",[]
spatialite:installing-spatialite-on-linux,spatialite,installing-spatialite-on-linux,Installing SpatiaLite on Linux,"SpatiaLite is packaged for most Linux distributions.
apt install spatialite-bin libsqlite3-mod-spatialite
Depending on your distribution, you should be able to run Datasette something like this:
datasette --load-extension=/usr/lib/x86_64-linux-gnu/mod_spatialite.so
If you are unsure of the location of the module, try running locate mod_spatialite and see what comes back.","[""SpatiaLite"", ""Installation""]",[]
spatialite:querying-polygons-using-within,spatialite,querying-polygons-using-within,Querying polygons using within(),"The within() SQL function can be used to check if a point is within a geometry:
select
name
from
places
where
within(GeomFromText('POINT(-3.1724366 51.4704448)'), places.geom);
The GeomFromText() function takes a string of well-known text. Note that the order used here is longitude then latitude .
To run that same within() query in a way that benefits from the spatial index, use the following:
select
name
from
places
where
within(GeomFromText('POINT(-3.1724366 51.4704448)'), places.geom)
and rowid in (
SELECT pkid FROM idx_places_geom
where xmin < -3.1724366
and xmax > -3.1724366
and ymin < 51.4704448
and ymax > 51.4704448
);","[""SpatiaLite""]",[]
spatialite:spatial-indexing-latitude-longitude-columns,spatialite,spatial-indexing-latitude-longitude-columns,Spatial indexing latitude/longitude columns,"Here's a recipe for taking a table with existing latitude and longitude columns, adding a SpatiaLite POINT geometry column to that table, populating the new column and then populating a spatial index:
import sqlite3
conn = sqlite3.connect(""museums.db"")
# Load the SpatiaLite extension:
conn.enable_load_extension(True)
conn.load_extension(""/usr/local/lib/mod_spatialite.dylib"")
# Initialize spatial metadata for this database:
conn.execute(""select InitSpatialMetadata(1)"")
# Add a geometry column called point_geom to our museums table:
conn.execute(
    ""SELECT AddGeometryColumn('museums', 'point_geom', 4326, 'POINT', 2);""
)
# Now update that geometry column with the lat/lon points
conn.execute(
    """"""
    UPDATE museums SET
    point_geom = GeomFromText('POINT('||""longitude""||' '||""latitude""||')',4326);
""""""
)
# Now add a spatial index to that column
conn.execute(
    'select CreateSpatialIndex(""museums"", ""point_geom"");'
)
# If you don't commit, your changes will not be persisted:
conn.commit()
conn.close()","[""SpatiaLite""]",[]
spatialite:spatialite-installation,spatialite,spatialite-installation,Installation,,"[""SpatiaLite""]",[]
sql_queries:canned-queries-json-api,sql_queries,canned-queries-json-api,JSON API for writable canned queries,"Writable canned queries can also be accessed using a JSON API. You can POST data to them using JSON, and you can request that their response is returned to you as JSON.
To submit JSON to a writable canned query, encode key/value parameters as a JSON document:
POST /mydatabase/add_message
{""message"": ""Message goes here""}
You can also continue to submit data using regular form encoding, like so:
POST /mydatabase/add_message
message=Message+goes+here
There are three options for specifying that you would like the response to your request to return JSON data, as opposed to an HTTP redirect to another page.
Set an Accept: application/json header on your request
Include ?_json=1 in the URL that you POST to
Include ""_json"": 1 in your JSON body, or &_json=1 in your form encoded body
The JSON response will look like this:
{
    ""ok"": true,
    ""message"": ""Query executed, 1 row affected"",
    ""redirect"": ""/data/add_name""
}
The ""message"" and ""redirect"" values here will take into account on_success_message , on_success_message_sql , on_success_redirect , on_error_message and on_error_redirect , if they have been set.","[""Running SQL queries"", ""Canned queries""]",[]
sql_queries:canned-queries-magic-parameters,sql_queries,canned-queries-magic-parameters,Magic parameters,"Named parameters that start with an underscore are special: they can be used to automatically add values created by Datasette that are not contained in the incoming form fields or query string.
These magic parameters are only supported for canned queries: to avoid security issues (such as queries that extract the user's private cookies) they are not available to SQL that is executed by the user as a custom SQL query.
Available magic parameters are:
_actor_* - e.g. _actor_id , _actor_name
Fields from the currently authenticated actor - see Actors .
_header_* - e.g. _header_user_agent
Header from the incoming HTTP request. The key should be in lower case and with hyphens converted to underscores e.g. _header_user_agent or _header_accept_language .
_cookie_* - e.g. _cookie_lang
The value of the incoming cookie of that name.
_now_epoch
The number of seconds since the Unix epoch.
_now_date_utc
The date in UTC, e.g. 2020-06-01
_now_datetime_utc
The ISO 8601 datetime in UTC, e.g. 2020-06-24T18:01:07Z
_random_chars_* - e.g. _random_chars_128
A random string of characters of the specified length.
Here's an example configuration that adds a message from the authenticated user, storing various pieces of additional metadata using magic parameters:
[[[cog
config_example(cog, """"""
databases:
  mydatabase:
    queries:
      add_message:
        allow:
          id: ""*""
        sql: |-
          INSERT INTO messages (
            user_id, message, datetime
          ) VALUES (
            :_actor_id, :message, :_now_datetime_utc
          )
        write: true
"""""")
]]]
[[[end]]]
The form presented at /mydatabase/add_message will have just a field for message - the other parameters will be populated by the magic parameter mechanism.
Additional custom magic parameters can be added by plugins using the register_magic_parameters(datasette) hook.","[""Running SQL queries"", ""Canned queries""]",[]
sql_queries:canned-queries-options,sql_queries,canned-queries-options,Additional canned query options,Additional options can be specified for canned queries in the YAML or JSON configuration.,"[""Running SQL queries"", ""Canned queries""]",[]
sql_queries:canned-queries-writable,sql_queries,canned-queries-writable,Writable canned queries,"Canned queries by default are read-only. You can use the ""write"": true key to indicate that a canned query can write to the database.
See Access to specific canned queries for details on how to add permission checks to canned queries, using the ""allow"" key.
[[[cog
config_example(cog, {
""databases"": {
""mydatabase"": {
""queries"": {
""add_name"": {
""sql"": ""INSERT INTO names (name) VALUES (:name)"",
""write"": True
}
}
}
}
})
]]]
[[[end]]]
This configuration will create a page at /mydatabase/add_name displaying a form with a name field. Submitting that form will execute the configured INSERT query.
You can customize how Datasette represents success and errors using the following optional properties:
on_success_message - the message shown when a query is successful
on_success_message_sql - alternative to on_success_message : a SQL query that should be executed to generate the message
on_success_redirect - the path or URL the user is redirected to on success
on_error_message - the message shown when a query throws an error
on_error_redirect - the path or URL the user is redirected to on error
For example:
[[[cog
config_example(cog, {
""databases"": {
""mydatabase"": {
""queries"": {
""add_name"": {
""sql"": ""INSERT INTO names (name) VALUES (:name)"",
""params"": [""name""],
""write"": True,
""on_success_message_sql"": ""select 'Name inserted: ' || :name"",
""on_success_redirect"": ""/mydatabase/names"",
""on_error_message"": ""Name insert failed"",
""on_error_redirect"": ""/mydatabase"",
}
}
}
}
})
]]]
[[[end]]]
You can use ""params"" to explicitly list the named parameters that should be displayed as form fields - otherwise they will be automatically detected. ""params"" is not necessary in the above example, since without it ""name"" would be automatically detected from the query.
You can pre-populate form fields when the page first loads using a query string, e.g. /mydatabase/add_name?name=Prepopulated . The user will have to submit the form to execute the query.
If you specify a query in ""on_success_message_sql"" , that query will be executed after the main query. The first column of the first row returned by that query will be displayed as a success message. Named parameters from the main query will be made available to the success message query as well.","[""Running SQL queries"", ""Canned queries""]",[]
sql_queries:hide-sql,sql_queries,hide-sql,hide_sql,"Canned queries default to displaying their SQL query at the top of the page. If the query is extremely long you may want to hide it by default, with a ""show"" link that can be used to make it visible.
Add the ""hide_sql"": true option to hide the SQL query by default.","[""Running SQL queries"", ""Canned queries"", ""Additional canned query options""]",[]
sql_queries:id1,sql_queries,id1,Canned queries,"As an alternative to adding views to your database, you can define canned queries inside your datasette.yaml file. Here's an example:
[[[cog
from metadata_doc import config_example, config_example
config_example(cog, {
""databases"": {
""sf-trees"": {
""queries"": {
""just_species"": {
""sql"": ""select qSpecies from Street_Tree_List""
}
}
}
}
})
]]]
[[[end]]]
Then run Datasette like this:
datasette sf-trees.db -c datasette.yaml
Each canned query will be listed on the database index page, and will also get its own URL at:
/database-name/canned-query-name
For the above example, that URL would be:
/sf-trees/just_species
You can optionally include ""title"" and ""description"" keys to show a title and description on the canned query page. As with regular table metadata you can alternatively specify ""description_html"" to have your description rendered as HTML (rather than having HTML special characters escaped).","[""Running SQL queries""]",[]
sql_queries:id2,sql_queries,id2,Pagination,"Datasette's default table pagination is designed to be extremely efficient. SQL OFFSET/LIMIT pagination can have a significant performance penalty once you get into multiple thousands of rows, as each page still requires the database to scan through every preceding row to find the correct offset.
When paginating through tables, Datasette instead orders the rows in the table by their primary key and performs a WHERE clause against the last seen primary key for the previous page. For example:
select rowid, * from Tree_List where rowid > 200 order by rowid limit 101
This represents page three for this particular table, with a page size of 100.
Note that we request 101 items in the limit clause rather than 100. This allows us to detect if we are on the last page of the results: if the query returns fewer than 101 rows we know we have reached the end of the pagination set. Datasette will only return the first 100 rows - the 101st is used purely to detect if there should be another page.
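The next page is then fetched by repeating the pattern against the last primary key value that was actually returned - for example, if the last row on this page had rowid 300:
select rowid, * from Tree_List where rowid > 300 order by rowid limit 101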
Since the where clause acts against the index on the primary key, the query is extremely fast even for records that are a long way into the overall pagination set.","[""Running SQL queries""]",[]
sql_queries:sql,sql_queries,sql,Running SQL queries,"Datasette treats SQLite database files as read-only and immutable. This means it is not possible to execute INSERT or UPDATE statements using Datasette, which allows us to expose SELECT statements to the outside world without needing to worry about SQL injection attacks.
The easiest way to execute custom SQL against Datasette is through the web UI. The database index page includes a SQL editor that lets you run any SELECT query you like. You can also construct queries using the filter interface on the tables page, then click ""View and edit SQL"" to open that query in the custom SQL editor.
Note that this interface is only available if the execute-sql permission is allowed. See Controlling the ability to execute arbitrary SQL .
Any Datasette SQL query is reflected in the URL of the page, allowing you to bookmark them, share them with others and navigate through previous queries using your browser back button.
You can also retrieve the results of any query as JSON by adding .json to the base URL.",[],[]
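For example, a custom SQL query and its JSON equivalent might use URLs like these (the database and table names are illustrative):
/mydatabase?sql=select+*+from+mytable
/mydatabase.json?sql=select+*+from+mytable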
testing_plugins:testing-plugins-datasette-test-instance,testing_plugins,testing-plugins-datasette-test-instance,Setting up a Datasette test instance,"The above example shows the easiest way to start writing tests against a Datasette instance:
from datasette.app import Datasette
import pytest
@pytest.mark.asyncio
async def test_plugin_is_installed():
    datasette = Datasette(memory=True)
    response = await datasette.client.get(""/-/plugins.json"")
    assert response.status_code == 200
Creating a Datasette() instance like this is a useful shortcut in tests, but there is one detail you need to be aware of. It's important to ensure that the async method .invoke_startup() is called on that instance. You can do that like this:
datasette = Datasette(memory=True)
await datasette.invoke_startup()
This method registers any startup(datasette) or prepare_jinja2_environment(env, datasette) plugins that might themselves need to make async calls.
If you are using await datasette.client.get() and similar methods then you don't need to worry about this - Datasette automatically calls invoke_startup() the first time it handles a request.","[""Testing plugins""]",[]
testing_plugins:testing-plugins-pdb,testing_plugins,testing-plugins-pdb,Using pdb for errors thrown inside Datasette,"If an exception occurs within Datasette itself during a test, the response returned to your plugin will have a response.status_code value of 500.
You can add pdb=True to the Datasette constructor to drop into a Python debugger session inside your test run instead of getting back a 500 response code. This is equivalent to running the datasette command-line tool with the --pdb option.
Here's what that looks like in a test function:
@pytest.mark.asyncio
async def test_that_opens_the_debugger_or_errors():
    ds = Datasette([db_path], pdb=True)
    response = await ds.client.get(""/"")
If you use this pattern you will need to run pytest with the -s option to avoid capturing stdin/stdout in order to interact with the debugger prompt.","[""Testing plugins""]",[]
testing_plugins:testing-plugins-register-in-test,testing_plugins,testing-plugins-register-in-test,Registering a plugin for the duration of a test,"When writing tests for plugins you may find it useful to register a test plugin just for the duration of a single test. You can do this using pm.register() and pm.unregister() like this:
from datasette import hookimpl
from datasette.app import Datasette
from datasette.plugins import pm
import pytest
@pytest.mark.asyncio
async def test_using_test_plugin():
    class TestPlugin:
        __name__ = ""TestPlugin""
        # Use hookimpl and method names to register hooks
        @hookimpl
        def register_routes(self):
            return [
                (r""^/error$"", lambda: 1 / 0),
            ]
    pm.register(TestPlugin(), name=""undo"")
    try:
        # The test implementation goes here
        datasette = Datasette()
        response = await datasette.client.get(""/error"")
        assert response.status_code == 500
    finally:
        pm.unregister(name=""undo"")
To reuse the same temporary plugin in multiple tests, you can register it inside a fixture in your conftest.py file like this:
from datasette import hookimpl
from datasette.app import Datasette
from datasette.plugins import pm
import pytest
import pytest_asyncio
@pytest_asyncio.fixture
async def datasette_with_plugin():
    class TestPlugin:
        __name__ = ""TestPlugin""
        @hookimpl
        def register_routes(self):
            return [
                (r""^/error$"", lambda: 1 / 0),
            ]
    pm.register(TestPlugin(), name=""undo"")
    try:
        yield Datasette()
    finally:
        pm.unregister(name=""undo"")
Note the yield statement here - this ensures that the finally: block that unregisters the plugin is executed only after the test function itself has completed.
Then in a test:
@pytest.mark.asyncio
async def test_error(datasette_with_plugin):
    response = await datasette_with_plugin.client.get(""/error"")
    assert response.status_code == 500","[""Testing plugins""]",[]
writing_plugins:writing-plugins-building-urls,writing_plugins,writing-plugins-building-urls,Building URLs within plugins,"Plugins that define their own custom user interface elements may need to link to other pages within Datasette.
This can be a bit tricky if the Datasette instance is using the base_url configuration setting to run behind a proxy, since that can cause Datasette's URLs to include an additional prefix.
The datasette.urls object provides internal methods for correctly generating URLs to different pages within Datasette, taking any base_url configuration into account.
This object is exposed in templates as the urls variable, which can be used like this:
<a href=""{{ urls.instance() }}"">Back to the Homepage</a>
See datasette.urls for full details on this object.","[""Writing plugins""]",[]
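The same methods are available to plugin code written in Python. A sketch, reusing database and table names from earlier examples:
path = datasette.urls.table(""sf-trees"", ""Street_Tree_List"")
# e.g. ""/sf-trees/Street_Tree_List"", with any configured
# base_url prefix applied automatically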
writing_plugins:writing-plugins-configuration,writing_plugins,writing-plugins-configuration,Writing plugins that accept configuration,"When you are writing plugins, you can access plugin configuration like this using the datasette plugin_config() method. If you know you need plugin configuration for a specific table, you can access it like this:
plugin_config = datasette.plugin_config(
    ""datasette-cluster-map"", database=""sf-trees"", table=""Street_Tree_List""
)
For the configuration example above, this will return {""latitude_column"": ""lat"", ""longitude_column"": ""lng""} .
If there is no configuration for that plugin, the method will return None .
If it cannot find the requested configuration at the table layer, it will fall back to the database layer and then the root layer. For example, a user may have set the plugin configuration option inside datasette.yaml like so:
[[[cog
from metadata_doc import metadata_example
metadata_example(cog, {
""databases"": {
""sf-trees"": {
""plugins"": {
""datasette-cluster-map"": {
""latitude_column"": ""xlat"",
""longitude_column"": ""xlng""
}
}
}
}
})
]]]
[[[end]]]
In this case, the above code would return that configuration for ANY table within the sf-trees database.
The plugin configuration could also be set at the top level of datasette.yaml :
[[[cog
metadata_example(cog, {
""plugins"": {
""datasette-cluster-map"": {
""latitude_column"": ""xlat"",
""longitude_column"": ""xlng""
}
}
})
]]]
[[[end]]]
Now that datasette-cluster-map plugin configuration will apply to every table in every database.","[""Writing plugins""]",[]
writing_plugins:writing-plugins-tracing,writing_plugins,writing-plugins-tracing,Tracing plugin hooks,"The DATASETTE_TRACE_PLUGINS environment variable turns on detailed tracing showing exactly which hooks are being run. This can be useful for understanding how Datasette is using your plugin.
DATASETTE_TRACE_PLUGINS=1 datasette mydb.db
Example output:
actor_from_request:
{   'datasette': <datasette.app.Datasette object at 0x...>,
    'request': <asgi.Request method=""GET"" url=""http://127.0.0.1:8001/..."">}
Hook implementations:
[   <HookImpl plugin_name='...', plugin=<module '...'>>,
    <HookImpl plugin_name='...', plugin=<module '...'>>,
    <HookImpl plugin_name='...', plugin=<module '...'>>]
Results:
[{'id': 'root'}]","[""Writing plugins""]",[]
installation:installing-plugins,installation,installing-plugins,Installing plugins,"If you want to install plugins into your local Datasette Docker image you can do
so using the following recipe. This will install the plugins and then save a
brand new local image called datasette-with-plugins :
docker run datasetteproject/datasette \
pip install datasette-vega
docker commit $(docker ps -lq) datasette-with-plugins
You can now run the new custom image like so:
docker run -p 8001:8001 -v `pwd`:/mnt \
datasette-with-plugins \
datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db
You can confirm that the plugins are installed by visiting
http://127.0.0.1:8001/-/plugins
Some plugins such as datasette-ripgrep may need additional system packages. You can install these by running apt-get install inside the container:
docker run datasette-057a0 bash -c '
apt-get update &&
apt-get install ripgrep &&
pip install datasette-ripgrep'
docker commit $(docker ps -lq) datasette-with-ripgrep","[""Installation"", ""Advanced installation options"", ""Using Docker""]","[{""href"": ""http://127.0.0.1:8001/-/plugins"", ""label"": ""http://127.0.0.1:8001/-/plugins""}, {""href"": ""https://datasette.io/plugins/datasette-ripgrep"", ""label"": ""datasette-ripgrep""}]"
installation:loading-spatialite,installation,loading-spatialite,Loading SpatiaLite,"The datasetteproject/datasette image includes a recent version of the
SpatiaLite extension for SQLite. To load and enable that
module, use the following command:
docker run -p 8001:8001 -v `pwd`:/mnt \
datasetteproject/datasette \
datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \
--load-extension=spatialite
You can confirm that SpatiaLite is successfully loaded by visiting
http://127.0.0.1:8001/-/versions","[""Installation"", ""Advanced installation options"", ""Using Docker""]","[{""href"": ""http://127.0.0.1:8001/-/versions"", ""label"": ""http://127.0.0.1:8001/-/versions""}]"
plugin_hooks:plugin-hook-prepare-jinja2-environment,plugin_hooks,plugin-hook-prepare-jinja2-environment,"prepare_jinja2_environment(env, datasette)","env - jinja2 Environment
The template environment that is being prepared
datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name)
This hook is called with the Jinja2 environment that is used to evaluate
Datasette HTML templates. You can use it to do things like register custom
template filters , for
example:
from datasette import hookimpl
@hookimpl
def prepare_jinja2_environment(env):
    env.filters[""uppercase""] = lambda u: u.upper()
You can now use this filter in your custom templates like so:
Table name: {{ table|uppercase }}
This function can return an awaitable function if it needs to run any async code.
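A sketch of that asynchronous form, assuming the plugin needs to read a value from a database first (the table and global names are invented):
from datasette import hookimpl
@hookimpl
def prepare_jinja2_environment(env, datasette):
    async def inner():
        db = datasette.get_database()
        result = await db.execute(""select count(*) from my_table"")
        env.globals[""my_table_count""] = result.first()[0]
    return inner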
Examples: datasette-edit-templates","[""Plugin hooks""]","[{""href"": ""http://jinja.pocoo.org/docs/2.10/api/#custom-filters"", ""label"": ""register custom\n template filters""}, {""href"": ""https://datasette.io/plugins/datasette-edit-templates"", ""label"": ""datasette-edit-templates""}]"
getting_started:getting-started-your-computer,getting_started,getting-started-your-computer,Using Datasette on your own computer,"First, follow the Installation instructions. Now you can run Datasette against a SQLite file on your computer using the following command:
datasette path/to/database.db
This will start a web server on port 8001 - visit http://localhost:8001/
to access the web interface.
Add -o to open your browser automatically once Datasette has started:
datasette path/to/database.db -o
Use Chrome on OS X? You can run datasette against your browser history
like so:
datasette ~/Library/Application\ Support/Google/Chrome/Default/History --nolock
The --nolock option ignores any file locks. This is safe as Datasette will open the file in read-only mode.
Now visiting http://localhost:8001/History/downloads will show you a web
interface to browse your downloads data:
http://localhost:8001/History/downloads.json will return that data as
JSON:
{
    ""database"": ""History"",
    ""columns"": [
        ""id"",
        ""current_path"",
        ""target_path"",
        ""start_time"",
        ""received_bytes"",
        ""total_bytes"",
        ...
    ],
    ""rows"": [
        [
            1,
            ""/Users/simonw/Downloads/DropboxInstaller.dmg"",
            ""/Users/simonw/Downloads/DropboxInstaller.dmg"",
            13097290269022132,
            626688,
            0,
            ...
        ]
    ]
}
http://localhost:8001/History/downloads.json?_shape=objects will return that data as
JSON in a more convenient format:
{
    ...
    ""rows"": [
        {
            ""start_time"": 13097290269022132,
            ""interrupt_reason"": 0,
            ""hash"": """",
            ""id"": 1,
            ""site_url"": """",
            ""referrer"": ""https://www.dropbox.com/downloading?src=index"",
            ...
        }
    ]
}","[""Getting started""]","[{""href"": ""http://localhost:8001/"", ""label"": ""http://localhost:8001/""}, {""href"": ""http://localhost:8001/History/downloads"", ""label"": ""http://localhost:8001/History/downloads""}, {""href"": ""http://localhost:8001/History/downloads.json"", ""label"": ""http://localhost:8001/History/downloads.json""}, {""href"": ""http://localhost:8001/History/downloads.json?_shape=objects"", ""label"": ""http://localhost:8001/History/downloads.json?_shape=objects""}]"
writing_plugins:writing-plugins-one-off,writing_plugins,writing-plugins-one-off,Writing one-off plugins,"The quickest way to start writing a plugin is to create a my_plugin.py file and drop it into your plugins/ directory. Here is an example plugin, which adds a new custom SQL function called hello_world() which takes no arguments and returns the string Hello world! .
from datasette import hookimpl
@hookimpl
def prepare_connection(conn):
    conn.create_function(
        ""hello_world"", 0, lambda: ""Hello world!""
    )
If you save this in plugins/my_plugin.py you can then start Datasette like this:
datasette serve mydb.db --plugins-dir=plugins/
Now you can navigate to http://localhost:8001/mydb and run this SQL:
select hello_world();
to see the output of your plugin.","[""Writing plugins""]","[{""href"": ""http://localhost:8001/mydb"", ""label"": ""http://localhost:8001/mydb""}]"
changelog:id88,changelog,id88,0.27 (2019-01-31),"New command: datasette plugins ( documentation ) shows you the currently installed list of plugins.
Datasette can now output newline-delimited JSON using the new ?_shape=array&_nl=on query string option.
Added documentation on The Datasette Ecosystem .
Now using Python 3.7.2 as the base for the official Datasette Docker image .","[""Changelog""]","[{""href"": ""http://ndjson.org/"", ""label"": ""newline-delimited JSON""}, {""href"": ""https://hub.docker.com/r/datasetteproject/datasette/"", ""label"": ""Datasette Docker image""}]"
changelog:id85,changelog,id85,0.28 (2019-05-19),A salmagundi of new features!,"[""Changelog""]","[{""href"": ""https://adamj.eu/tech/2019/01/18/a-salmagundi-of-django-alpha-announcements/"", ""label"": ""salmagundi""}]"
changelog:asgi,changelog,asgi,ASGI,"ASGI is the Asynchronous Server Gateway Interface standard. I've been wanting to convert Datasette into an ASGI application for over a year - Port Datasette to ASGI #272 tracks thirteen months of intermittent development - but with Datasette 0.29 the change is finally released. This also means Datasette now runs on top of Uvicorn and no longer depends on Sanic .
I wrote about the significance of this change in Porting Datasette to ASGI, and Turtles all the way down .
The most exciting consequence of this change is that Datasette plugins can now take advantage of the ASGI standard.","[""Changelog"", ""0.29 (2019-07-07)""]","[{""href"": ""https://asgi.readthedocs.io/"", ""label"": ""ASGI""}, {""href"": ""https://github.com/simonw/datasette/issues/272"", ""label"": ""Port Datasette to ASGI #272""}, {""href"": ""https://www.uvicorn.org/"", ""label"": ""Uvicorn""}, {""href"": ""https://github.com/huge-success/sanic"", ""label"": ""Sanic""}, {""href"": ""https://simonwillison.net/2019/Jun/23/datasette-asgi/"", ""label"": ""Porting Datasette to ASGI, and Turtles all the way down""}]"
plugin_hooks:plugin-asgi-wrapper,plugin_hooks,plugin-asgi-wrapper,asgi_wrapper(datasette),"Return an ASGI middleware wrapper function that will be applied to the Datasette ASGI application.
This is a very powerful hook. You can use it to manipulate the entire Datasette response, or even to configure new URL routes that will be handled by your own custom code.
You can write your ASGI code directly against the low-level specification, or you can use the middleware utilities provided by an ASGI framework such as Starlette .
This example plugin adds a x-databases HTTP header listing the currently attached databases:
from datasette import hookimpl
from functools import wraps
@hookimpl
def asgi_wrapper(datasette):
def wrap_with_databases_header(app):
@wraps(app)
async def add_x_databases_header(
scope, receive, send
):
async def wrapped_send(event):
if event[""type""] == ""http.response.start"":
original_headers = (
event.get(""headers"") or []
)
event = {
""type"": event[""type""],
""status"": event[""status""],
""headers"": original_headers
+ [
[
b""x-databases"",
"", "".join(
datasette.databases.keys()
).encode(""utf-8""),
]
],
}
await send(event)
await app(scope, receive, wrapped_send)
return add_x_databases_header
return wrap_with_databases_header
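Alternatively, here is a sketch of the Starlette approach mentioned above - it assumes Starlette is installed and uses its standard GZipMiddleware, but any ASGI middleware class could be wrapped the same way:
from datasette import hookimpl
from starlette.middleware.gzip import GZipMiddleware
@hookimpl
def asgi_wrapper(datasette):
    def wrap_with_gzip(app):
        # Compress any response body larger than 1000 bytes
        return GZipMiddleware(app, minimum_size=1000)
    return wrap_with_gzip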
Examples: datasette-cors , datasette-pyinstrument , datasette-total-page-time","[""Plugin hooks""]","[{""href"": ""https://asgi.readthedocs.io/"", ""label"": ""ASGI""}, {""href"": ""https://www.starlette.io/middleware/"", ""label"": ""Starlette""}, {""href"": ""https://datasette.io/plugins/datasette-cors"", ""label"": ""datasette-cors""}, {""href"": ""https://datasette.io/plugins/datasette-pyinstrument"", ""label"": ""datasette-pyinstrument""}, {""href"": ""https://datasette.io/plugins/datasette-total-page-time"", ""label"": ""datasette-total-page-time""}]"
internals:internals-request,internals,internals-request,Request object,"The request object is passed to various plugin hooks. It represents an incoming HTTP request. It has the following properties:
.scope - dictionary
The ASGI scope that was used to construct this request, described in the ASGI HTTP connection scope specification.
.method - string
The HTTP method for this request, usually GET or POST .
.url - string
The full URL for this request, e.g. https://latest.datasette.io/fixtures .
.scheme - string
The request scheme - usually https or http .
.headers - dictionary (str -> str)
A dictionary of incoming HTTP request headers. Header names have been converted to lowercase.
.cookies - dictionary (str -> str)
A dictionary of incoming cookies
.host - string
The host header from the incoming request, e.g. latest.datasette.io or localhost .
.path - string
The path of the request excluding the query string, e.g. /fixtures .
.full_path - string
The path of the request including the query string if one is present, e.g. /fixtures?sql=select+sqlite_version() .
.query_string - string
The query string component of the request, without the ? - e.g. name__contains=sam&age__gt=10 .
.args - MultiParams
An object representing the parsed query string parameters, see below.
.url_vars - dictionary (str -> str)
Variables extracted from the URL path, if that path was defined using a regular expression. See register_routes(datasette) .
.actor - dictionary (str -> Any) or None
The currently authenticated actor (see actors ), or None if the request is unauthenticated.
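As a quick sketch of working with .args - this uses the Request.fake() class method described below, and assumes the .get() and .getlist() accessors provided by MultiParams:
from datasette import Request
request = Request.fake(""/fixtures?name__contains=sam&tag=a&tag=b"")
print(request.args.get(""name__contains""))  # sam
print(request.args.getlist(""tag""))  # ['a', 'b']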
The object also has two awaitable methods:
await request.post_vars() - dictionary
Returns a dictionary of form variables that were submitted in the request body via POST . Don't forget to read about CSRF protection !
await request.post_body() - bytes
Returns the un-parsed body of a request submitted by POST - useful for things like incoming JSON data.
And a class method that can be used to create fake request objects for use in tests:
fake(path_with_query_string, method=""GET"", scheme=""http"", url_vars=None)
Returns a Request instance for the specified path and method. For example:
from datasette import Request
from pprint import pprint
request = Request.fake(
""/fixtures/facetable/"",
url_vars={""database"": ""fixtures"", ""table"": ""facetable""},
)
pprint(request.scope)
This outputs:
{'http_version': '1.1',
'method': 'GET',
'path': '/fixtures/facetable/',
'query_string': b'',
'raw_path': b'/fixtures/facetable/',
'scheme': 'http',
'type': 'http',
'url_route': {'kwargs': {'database': 'fixtures', 'table': 'facetable'}}}","[""Internals for plugins""]","[{""href"": ""https://asgi.readthedocs.io/en/latest/specs/www.html#connection-scope"", ""label"": ""ASGI HTTP connection scope""}]"
plugin_hooks:plugin-hook-skip-csrf,plugin_hooks,plugin-hook-skip-csrf,"skip_csrf(datasette, scope)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
scope - dictionary
The ASGI scope for the incoming HTTP request.
This hook can be used to skip CSRF protection for a specific incoming request. For example, you might have a custom path at /submit-comment which is designed to accept comments from anywhere, whether or not the incoming request originated on the site and has an accompanying CSRF token.
This example will disable CSRF protection for that specific URL path:
from datasette import hookimpl
@hookimpl
def skip_csrf(scope):
return scope[""path""] == ""/submit-comment""
If any of the currently active skip_csrf() plugin hooks return True , CSRF protection will be skipped for the request.","[""Plugin hooks""]","[{""href"": ""https://asgi.readthedocs.io/en/latest/specs/www.html#http-connection-scope"", ""label"": ""ASGI scope""}]"
installation:installation-homebrew,installation,installation-homebrew,Using Homebrew,"If you have a Mac and use Homebrew , you can install Datasette by running this command in your terminal:
brew install datasette
This should install the latest version. You can confirm by running:
datasette --version
You can upgrade to the latest Homebrew packaged version using:
brew upgrade datasette
Once you have installed Datasette you can install plugins using the following:
datasette install datasette-vega
If the latest packaged release of Datasette has not yet been made available through Homebrew, you can upgrade your Homebrew installation in-place using:
datasette install -U datasette","[""Installation"", ""Basic installation""]","[{""href"": ""https://brew.sh/"", ""label"": ""Homebrew""}]"
spatialite:installing-spatialite-on-os-x,spatialite,installing-spatialite-on-os-x,Installing SpatiaLite on OS X,"The easiest way to install SpatiaLite on OS X is to use Homebrew .
brew update
brew install spatialite-tools
This will install the spatialite command-line tool and the mod_spatialite dynamic library.
You can now run Datasette like so:
datasette --load-extension=spatialite","[""SpatiaLite"", ""Installation""]","[{""href"": ""https://brew.sh/"", ""label"": ""Homebrew""}]"
plugin_hooks:plugin-hook-register-commands,plugin_hooks,plugin-hook-register-commands,register_commands(cli),"cli - the root Datasette Click command group
Use this to register additional CLI commands
Register additional CLI commands that can be run using datasette yourcommand ... . This provides a mechanism by which plugins can add new CLI commands to Datasette.
This example registers a new datasette verify file1.db file2.db command that checks if the provided file paths are valid SQLite databases:
from datasette import hookimpl
import click
import sqlite3
@hookimpl
def register_commands(cli):
@cli.command()
@click.argument(
""files"", type=click.Path(exists=True), nargs=-1
)
def verify(files):
""Verify that files can be opened by Datasette""
for file in files:
conn = sqlite3.connect(str(file))
try:
conn.execute(""select * from sqlite_master"")
except sqlite3.DatabaseError:
raise click.ClickException(
""Invalid database: {}"".format(file)
)
The new command can then be executed like so:
datasette verify fixtures.db
Help text (from the docstring for the function plus any defined Click arguments or options) will become available using:
datasette verify --help
Plugins can register multiple commands by making multiple calls to the @cli.command() decorator. Consult the Click documentation for full details on how to build a CLI command, including how to define arguments and options.
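For instance, here is a hypothetical greet command - not part of Datasette, just a sketch of combining a Click argument with a Click option:
from datasette import hookimpl
import click
@hookimpl
def register_commands(cli):
    @cli.command()
    @click.argument(""name"")
    @click.option(""--upper"", is_flag=True, help=""Shout the greeting"")
    def greet(name, upper):
        ""Greet the provided NAME""
        message = ""Hello, {}!"".format(name)
        if upper:
            message = message.upper()
        click.echo(message)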
Note that register_commands() plugins cannot be used with the --plugins-dir mechanism - they need to be installed into the same virtual environment as Datasette using pip install . Provided it has a setup.py file (see Packaging a plugin ) you can run pip install directly against the directory in which you are developing your plugin like so:
pip install -e path/to/my/datasette-plugin
Examples: datasette-auth-passwords , datasette-verify","[""Plugin hooks""]","[{""href"": ""https://click.palletsprojects.com/en/latest/commands/#callback-invocation"", ""label"": ""Click command group""}, {""href"": ""https://click.palletsprojects.com/"", ""label"": ""Click documentation""}, {""href"": ""https://datasette.io/plugins/datasette-auth-passwords"", ""label"": ""datasette-auth-passwords""}, {""href"": ""https://datasette.io/plugins/datasette-verify"", ""label"": ""datasette-verify""}]"
publish:publish-cloud-run,publish,publish-cloud-run,Publishing to Google Cloud Run,"Google Cloud Run allows you to publish data in a scale-to-zero environment, so your application will start running when the first request is received and will shut down again when traffic ceases. This means you only pay for time spent serving traffic.
Cloud Run is a great option for inexpensively hosting small, low traffic projects - but costs can add up for projects that serve a lot of requests.
Be particularly careful if your project has tables with large numbers of rows. Search engine crawlers that index a page for every row could result in a high bill.
The datasette-block-robots plugin can be used to ask search engine crawlers not to crawl your site, which can help avoid this issue.
You will first need to install and configure the Google Cloud CLI tools by following these instructions .
You can then publish one or more SQLite database files to Google Cloud Run using the following command:
datasette publish cloudrun mydatabase.db --service=my-database
A Cloud Run service is a single hosted application. The service name you specify will be used as part of the Cloud Run URL. If you deploy to a service name that you have used in the past your new deployment will replace the previous one.
If you omit the --service option you will be asked to pick a service name interactively during the deploy.
You may need to interact with prompts from the tool. Many of the prompts ask for values that can instead be set as properties for the Google Cloud SDK, if you want to avoid being prompted each time.
For example, the default region for the deployed instance can be set using the command:
gcloud config set run/region us-central1
You should replace us-central1 with your desired region . Alternately, you can specify the region by setting the CLOUDSDK_RUN_REGION environment variable.
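For example, in a bash-style shell:
export CLOUDSDK_RUN_REGION=us-central1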
Once it has finished it will output a URL like this one:
Service [my-service] revision [my-service-00001] has been deployed
and is serving traffic at https://my-service-j7hipcg4aq-uc.a.run.app
Cloud Run provides a URL on the .run.app domain, but you can also point your own domain or subdomain at your Cloud Run service - see mapping custom domains in the Cloud Run documentation for details.
See datasette publish cloudrun for the full list of options for this command.","[""Publishing data"", ""datasette publish""]","[{""href"": ""https://cloud.google.com/run/"", ""label"": ""Google Cloud Run""}, {""href"": ""https://datasette.io/plugins/datasette-block-robots"", ""label"": ""datasette-block-robots""}, {""href"": ""https://cloud.google.com/sdk/"", ""label"": ""these instructions""}, {""href"": ""https://cloud.google.com/sdk/docs/properties"", ""label"": ""set as properties for the Google Cloud SDK""}, {""href"": ""https://cloud.google.com/about/locations"", ""label"": ""region""}, {""href"": ""https://cloud.google.com/run/docs/mapping-custom-domains"", ""label"": ""mapping custom domains""}]"
changelog:v0-28-publish-cloudrun,changelog,v0-28-publish-cloudrun,datasette publish cloudrun,"Google Cloud Run is a brand new serverless hosting platform from Google, which allows you to build a Docker container which will run only when HTTP traffic is received and will shut down (and hence cost you nothing) the rest of the time. It's similar to Zeit's Now v1 Docker hosting platform which sadly is no longer accepting signups from new users.
The new datasette publish cloudrun command was contributed by Romain Primet ( #434 ) and publishes selected databases to a new Datasette instance running on Google Cloud Run.
See Publishing to Google Cloud Run for full documentation.","[""Changelog"", ""0.28 (2019-05-19)""]","[{""href"": ""https://cloud.google.com/run/"", ""label"": ""Google Cloud Run""}, {""href"": ""https://hyperion.alpha.spectrum.chat/zeit/now/cannot-create-now-v1-deployments~d206a0d4-5835-4af5-bb5c-a17f0171fb25?m=MTU0Njk2NzgwODM3OA=="", ""label"": ""no longer accepting signups""}, {""href"": ""https://github.com/simonw/datasette/pull/434"", ""label"": ""#434""}]"
contributing:contributing-upgrading-codemirror,contributing,contributing-upgrading-codemirror,Upgrading CodeMirror,"Datasette bundles CodeMirror for the SQL editing interface, e.g. on this page . Here are the steps for upgrading to a new version of CodeMirror:
Install the packages with:
npm i codemirror @codemirror/lang-sql
Build the bundle using the version number from package.json with:
node_modules/.bin/rollup datasette/static/cm-editor-6.0.1.js \
-f iife \
-n cm \
-o datasette/static/cm-editor-6.0.1.bundle.js \
-p @rollup/plugin-node-resolve \
-p @rollup/plugin-terser
Update the version reference in the codemirror.html template.","[""Contributing""]","[{""href"": ""https://codemirror.net/"", ""label"": ""CodeMirror""}, {""href"": ""https://latest.datasette.io/fixtures"", ""label"": ""this page""}]"
facets:id1,facets,id1,Facets,"Datasette facets can be used to add a faceted browse interface to any database table.
With facets, tables are displayed along with a summary showing the most common values in specified columns.
These values can be selected to further filter the table.
Here's an example:
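That example facets the legislator_terms table using query string parameters along these lines:
/legislators/legislator_terms?_facet=type&_facet=party&_facet=state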
Facets can be specified in two ways: using query string parameters, or in metadata.json configuration for the table.",[],"[{""href"": ""https://congress-legislators.datasettes.com/legislators/legislator_terms?_facet=type&_facet=party&_facet=state&_facet_size=10"", ""label"": ""an example""}]"
changelog:id23,changelog,id23,0.59.3 (2021-11-20),"Fixed numerous bugs when running Datasette behind a proxy with a prefix URL path using the base_url setting. A live demo of this mode is now available at datasette-apache-proxy-demo.datasette.io/prefix/ . ( #1519 , #838 )
?column__arraycontains= and ?column__arraynotcontains= table parameters now also work against SQL views. ( #448 )
?_facet_array=column no longer returns incorrect counts if columns contain the same value more than once.","[""Changelog""]","[{""href"": ""https://datasette-apache-proxy-demo.datasette.io/prefix/"", ""label"": ""datasette-apache-proxy-demo.datasette.io/prefix/""}, {""href"": ""https://github.com/simonw/datasette/issues/1519"", ""label"": ""#1519""}, {""href"": ""https://github.com/simonw/datasette/issues/838"", ""label"": ""#838""}, {""href"": ""https://github.com/simonw/datasette/issues/448"", ""label"": ""#448""}]"
ecosystem:ecosystem,ecosystem,ecosystem,The Datasette Ecosystem,"Datasette sits at the center of a growing ecosystem of open source tools aimed at making it as easy as possible to gather, analyze and publish interesting data.
These tools are divided into two main groups: tools for building SQLite databases (for use with Datasette) and plugins that extend Datasette's functionality.
The Datasette project website includes a directory of plugins and a directory of tools:
Plugins directory on datasette.io
Tools directory on datasette.io",[],"[{""href"": ""https://datasette.io/"", ""label"": ""Datasette project website""}, {""href"": ""https://datasette.io/plugins"", ""label"": ""Plugins directory on datasette.io""}, {""href"": ""https://datasette.io/tools"", ""label"": ""Tools directory on datasette.io""}]"
changelog:id37,changelog,id37,0.53 (2020-12-10),"Datasette has an official project website now, at https://datasette.io/ . This release mainly updates the documentation to reflect the new site.
New ?column__arraynotcontains= table filter. ( #1132 )
datasette serve has a new --create option, which will create blank database files if they do not already exist rather than exiting with an error. ( #1135 )
New ?_header=off option for CSV export which omits the CSV header row, documented here . ( #1133 )
""Powered by Datasette"" link in the footer now links to https://datasette.io/ . ( #1138 )
Project news no longer lives in the README - it can now be found at https://datasette.io/news . ( #1137 )","[""Changelog""]","[{""href"": ""https://datasette.io/"", ""label"": ""https://datasette.io/""}, {""href"": ""https://github.com/simonw/datasette/issues/1132"", ""label"": ""#1132""}, {""href"": ""https://github.com/simonw/datasette/issues/1135"", ""label"": ""#1135""}, {""href"": ""https://github.com/simonw/datasette/issues/1133"", ""label"": ""#1133""}, {""href"": ""https://datasette.io/"", ""label"": ""https://datasette.io/""}, {""href"": ""https://github.com/simonw/datasette/issues/1138"", ""label"": ""#1138""}, {""href"": ""https://datasette.io/news"", ""label"": ""https://datasette.io/news""}, {""href"": ""https://github.com/simonw/datasette/issues/1137"", ""label"": ""#1137""}]"
installation:installation-datasette-desktop,installation,installation-datasette-desktop,Datasette Desktop for Mac,Datasette Desktop is a packaged Mac application which bundles Datasette together with Python and allows you to install and run Datasette directly on your laptop. This is the best option for local installation if you are not comfortable using the command line.,"[""Installation"", ""Basic installation""]","[{""href"": ""https://datasette.io/desktop"", ""label"": ""Datasette Desktop""}]"
writing_plugins:writing-plugins-designing-urls,writing_plugins,writing-plugins-designing-urls,Designing URLs for your plugin,"You can register new URL routes within Datasette using the register_routes(datasette) plugin hook.
Datasette's default URLs include these:
/dbname - database page
/dbname/tablename - table page
/dbname/tablename/pk - row page
See Pages and API endpoints and Introspection for more default URL routes.
To avoid accidentally conflicting with a database file that may be loaded into Datasette, plugins should register URLs using a /-/ prefix. For example, if your plugin adds a new interface for uploading Excel files you might register a URL route like this one:
/-/upload-excel
Try to avoid registering URLs that clash with other plugins that your users might have installed. There is no central repository of reserved URL paths (yet) but you can review existing plugins by browsing the plugins directory .
If your plugin includes functionality that relates to a specific database you could also register a URL route like this:
/dbname/-/upload-excel
Or for a specific table like this:
/dbname/tablename/-/modify-table-schema
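A minimal sketch of registering routes with these shapes, using the register_routes(datasette) hook - the upload_excel view body is hypothetical:
from datasette import hookimpl, Response
async def upload_excel(request):
    # Placeholder view - a real plugin would render an upload form here
    database = request.url_vars.get(""database"")
    return Response.html(
        ""Upload form for {}"".format(database or ""the instance"")
    )
@hookimpl
def register_routes():
    return [
        (r""^/-/upload-excel$"", upload_excel),
        (r""^/(?P<database>[^/]+)/-/upload-excel$"", upload_excel),
    ]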
Note that a row could have a primary key of - and this URL scheme will still work, because Datasette row pages do not ever have a trailing slash followed by additional path components.","[""Writing plugins""]","[{""href"": ""https://datasette.io/plugins"", ""label"": ""plugins directory""}]"
plugin_hooks:plugin-register-output-renderer,plugin_hooks,plugin-register-output-renderer,register_output_renderer(datasette),"datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name)
Registers a new output renderer, to output data in a custom format. The hook function should return a dictionary, or a list of dictionaries, of the following shape:
@hookimpl
def register_output_renderer(datasette):
return {
""extension"": ""test"",
""render"": render_demo,
""can_render"": can_render_demo, # Optional
}
This will register render_demo to be called when paths with the extension .test (for example /database.test , /database/table.test , or /database/table/row.test ) are requested.
render_demo is a Python function. It can be a regular function or an async def render_demo() awaitable function, depending on whether it needs to make any asynchronous calls.
can_render_demo is a Python function (or async def function) which accepts the same arguments as render_demo but just returns True or False . It lets Datasette know if the current SQL query can be represented by the plugin - and hence influences whether a link to this output format is displayed in the user interface. If you omit the ""can_render"" key from the dictionary every query will be treated as being supported by the plugin.
When a request is received, the ""render"" callback function is called with zero or more of the following arguments. Datasette will inspect your callback function and pass arguments that match its function signature.
datasette - Datasette class
For accessing plugin configuration and executing queries.
columns - list of strings
The names of the columns returned by this query.
rows - list of sqlite3.Row objects
The rows returned by the query.
sql - string
The SQL query that was executed.
query_name - string or None
If this was the execution of a canned query , the name of that query.
database - string
The name of the database.
table - string or None
The table or view, if one is being rendered.
request - Request object
The current HTTP request.
error - string or None
If an error occurred this string will contain the error message.
truncated - bool or None
If the query response was truncated - for example a SQL query returning more than 1,000 results where pagination is not available - this will be True .
view_name - string
The name of the current view being called. index , database , table , and row are the most important ones.
The callback function can return None , if it is unable to render the data, or a Response class that will be returned to the caller.
It can also return a dictionary with the following keys. This format is deprecated as-of Datasette 0.49 and will be removed by Datasette 1.0.
body - string or bytes, optional
The response body, default empty
content_type - string, optional
The Content-Type header, default text/plain
status_code - integer, optional
The HTTP status code, default 200
headers - dictionary, optional
Extra HTTP headers to be returned in the response.
An example of an output renderer callback function:
def render_demo():
return Response.text(""Hello World"")
Here is a more complex example:
from datasette import Response
async def render_demo(datasette, columns, rows):
db = datasette.get_database()
result = await db.execute(""select sqlite_version()"")
first_row = "" | "".join(columns)
lines = [first_row]
lines.append(""="" * len(first_row))
for row in rows:
lines.append("" | "".join(row))
return Response(
""\n"".join(lines),
content_type=""text/plain; charset=utf-8"",
headers={""x-sqlite-version"": result.first()[0]},
)
And here is an example can_render function which returns True only if the query results contain the columns atom_id , atom_title and atom_updated :
def can_render_demo(columns):
return {
""atom_id"",
""atom_title"",
""atom_updated"",
}.issubset(columns)
Examples: datasette-atom , datasette-ics , datasette-geojson , datasette-copyable","[""Plugin hooks""]","[{""href"": ""https://datasette.io/plugins/datasette-atom"", ""label"": ""datasette-atom""}, {""href"": ""https://datasette.io/plugins/datasette-ics"", ""label"": ""datasette-ics""}, {""href"": ""https://datasette.io/plugins/datasette-geojson"", ""label"": ""datasette-geojson""}, {""href"": ""https://datasette.io/plugins/datasette-copyable"", ""label"": ""datasette-copyable""}]"
plugin_hooks:plugin-register-routes,plugin_hooks,plugin-register-routes,register_routes(datasette),"datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name)
Register additional view functions to execute for specified URL routes.
Return a list of (regex, view_function) pairs, something like this:
from datasette import hookimpl, Response
import html
async def hello_from(request):
name = request.url_vars[""name""]
return Response.html(
""Hello from {}"".format(html.escape(name))
)
@hookimpl
def register_routes():
return [(r""^/hello-from/(?P.*)$"", hello_from)]
The view functions can take a number of different optional arguments. The corresponding argument will be passed to your function depending on its named parameters - a form of dependency injection.
The optional view function arguments are as follows:
datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
request - Request object
The current HTTP request.
scope - dictionary
The incoming ASGI scope dictionary.
send - function
The ASGI send function.
receive - function
The ASGI receive function.
The view function can be a regular function or an async def function, depending on whether it needs to use any await APIs.
The function can either return a Response class or it can return nothing and instead respond directly to the request using the ASGI send function (for advanced uses only).
It can also raise the datasette.NotFound exception to return a 404 not found error, or the datasette.Forbidden exception for a 403 forbidden.
See Designing URLs for your plugin for tips on designing the URL routes used by your plugin.
Examples: datasette-auth-github , datasette-psutil","[""Plugin hooks""]","[{""href"": ""https://datasette.io/plugins/datasette-auth-github"", ""label"": ""datasette-auth-github""}, {""href"": ""https://datasette.io/plugins/datasette-psutil"", ""label"": ""datasette-psutil""}]"
changelog:v1-0-a4,changelog,v1-0-a4,1.0a4 (2023-08-21),"This alpha fixes a security issue with the /-/api API explorer. On authenticated Datasette instances (instances protected using plugins such as datasette-auth-passwords ) the API explorer interface could reveal the names of databases and tables within the protected instance. The data stored in those tables was not revealed.
For more information and workarounds, read the security advisory . The issue has been present in every previous alpha version of Datasette 1.0: versions 1.0a0, 1.0a1, 1.0a2 and 1.0a3.
Also in this alpha:
The new datasette plugins --requirements option outputs a list of currently installed plugins in Python requirements.txt format, useful for duplicating that installation elsewhere. ( #2133 )
Writable canned queries can now define a on_success_message_sql field in their configuration, containing a SQL query that should be executed upon successful completion of the write operation in order to generate a message to be shown to the user. ( #2138 )
The automatically generated border color for a database is now shown in more places around the application. ( #2119 )
Every instance of example shell script code in the documentation should now include a working copy button, free from additional syntax. ( #2140 )","[""Changelog""]","[{""href"": ""https://datasette.io/plugins/datasette-auth-passwords"", ""label"": ""datasette-auth-passwords""}, {""href"": ""https://github.com/simonw/datasette/security/advisories/GHSA-7ch3-7pp7-7cpq"", ""label"": ""the security advisory""}, {""href"": ""https://github.com/simonw/datasette/issues/2133"", ""label"": ""#2133""}, {""href"": ""https://github.com/simonw/datasette/issues/2138"", ""label"": ""#2138""}, {""href"": ""https://github.com/simonw/datasette/issues/2119"", ""label"": ""#2119""}, {""href"": ""https://github.com/simonw/datasette/issues/2140"", ""label"": ""#2140""}]"
plugin_hooks:plugin-hook-actor-from-request,plugin_hooks,plugin-hook-actor-from-request,"actor_from_request(datasette, request)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
request - Request object
The current HTTP request.
This is part of Datasette's authentication and permissions system . The function should attempt to authenticate an actor (either a user or an API actor of some sort) based on information in the request.
If it cannot authenticate an actor, it should return None . Otherwise it should return a dictionary representing that actor.
Here's an example that authenticates the actor based on an incoming API key:
from datasette import hookimpl
import secrets
SECRET_KEY = ""this-is-a-secret""
@hookimpl
def actor_from_request(datasette, request):
authorization = (
request.headers.get(""authorization"") or """"
)
expected = ""Bearer {}"".format(SECRET_KEY)
if secrets.compare_digest(authorization, expected):
return {""id"": ""bot""}
If you install this in your plugins directory you can test it like this:
curl -H 'Authorization: Bearer this-is-a-secret' http://localhost:8003/-/actor.json
Instead of returning a dictionary, this function can return an awaitable function which itself returns either None or a dictionary. This is useful for authentication functions that need to make a database query - for example:
from datasette import hookimpl
@hookimpl
def actor_from_request(datasette, request):
async def inner():
token = request.args.get(""_token"")
if not token:
return None
# Look up ?_token=xxx in sessions table
result = await datasette.get_database().execute(
""select count(*) from sessions where token = ?"",
[token],
)
if result.first()[0]:
return {""token"": token}
else:
return None
return inner
Examples: datasette-auth-tokens , datasette-auth-passwords","[""Plugin hooks""]","[{""href"": ""https://datasette.io/plugins/datasette-auth-tokens"", ""label"": ""datasette-auth-tokens""}, {""href"": ""https://datasette.io/plugins/datasette-auth-passwords"", ""label"": ""datasette-auth-passwords""}]"
configuration:configuration-cli,configuration,configuration-cli,Configuration via the command-line,"The recommended way to configure Datasette is using a datasette.yaml file passed to -c/--config . You can also pass individual settings to Datasette using the -s/--setting option, which can be used multiple times:
datasette mydatabase.db \
--setting settings.default_page_size 50 \
--setting settings.sql_time_limit_ms 3500
This option takes dotted-notation for the first argument and a value for the second argument. This means you can use it to set any configuration value that would be valid in a datasette.yaml file.
It also works for plugin configuration, for example for datasette-cluster-map :
datasette mydatabase.db \
--setting plugins.datasette-cluster-map.latitude_column xlat \
--setting plugins.datasette-cluster-map.longitude_column xlon
If the value you provide is a valid JSON object or list it will be treated as nested data, allowing you to configure plugins that accept lists such as datasette-proxy-url :
datasette mydatabase.db \
-s plugins.datasette-proxy-url.paths '[{""path"": ""/proxy"", ""backend"": ""http://example.com/""}]'
This is equivalent to a datasette.yaml file containing the following:
[[[cog
from metadata_doc import config_example
import textwrap
config_example(cog, textwrap.dedent(
""""""
plugins:
datasette-proxy-url:
paths:
- path: /proxy
backend: http://example.com/
"""""").strip()
)
]]]
[[[end]]]","[""Configuration""]","[{""href"": ""https://datasette.io/plugins/datasette-cluster-map"", ""label"": ""datasette-cluster-map""}, {""href"": ""https://datasette.io/plugins/datasette-proxy-url"", ""label"": ""datasette-proxy-url""}]"
cli-reference:cli-help-install-help,cli-reference,cli-help-install-help,datasette install,"Install new Datasette plugins. This command works like pip install but ensures that your plugins will be installed into the same environment as Datasette.
This command:
datasette install datasette-cluster-map
Would install the datasette-cluster-map plugin.
[[[cog
help([""install"", ""--help""])
]]]
Usage: datasette install [OPTIONS] [PACKAGES]...
Install plugins and packages from PyPI into the same environment as Datasette
Options:
-U, --upgrade Upgrade packages to latest version
-r, --requirement PATH Install from requirements file
-e, --editable TEXT Install a project in editable mode from this path
--help Show this message and exit.
[[[end]]]","[""CLI reference""]","[{""href"": ""https://datasette.io/plugins/datasette-cluster-map"", ""label"": ""datasette-cluster-map""}]"
plugin_hooks:plugin-hook-extra-body-script,plugin_hooks,plugin-hook-extra-body-script,"extra_body_script(template, database, table, columns, view_name, request, datasette)","Extra JavaScript to be added to a <script> block at the end of the <body> element on the page:
@hookimpl
def extra_body_script():
return {
""module"": True,
""script"": ""console.log('Your JavaScript goes here...')"",
}
This will add the following to the end of your page:
<script type=""module"">console.log('Your JavaScript goes here...')</script>
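The hook can also return a plain string of JavaScript, and its arguments can be used to scope where the script appears - a sketch assuming you only want the script on pages for a database named mydb:
from datasette import hookimpl
@hookimpl
def extra_body_script(database):
    if database == ""mydb"":
        return ""console.log('Viewing mydb');""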
Example: datasette-cluster-map","[""Plugin hooks"", ""Page extras""]","[{""href"": ""https://datasette.io/plugins/datasette-cluster-map"", ""label"": ""datasette-cluster-map""}]"
plugin_hooks:plugin-hook-query-actions,plugin_hooks,plugin-hook-query-actions,"query_actions(datasette, actor, database, query_name, request, sql, params)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
actor - dictionary or None
The currently authenticated actor .
database - string
The name of the database.
query_name - string or None
The name of the canned query, or None if this is an arbitrary SQL query.
request - Request object
The current HTTP request.
sql - string
The SQL query being executed
params - dictionary
The parameters passed to the SQL query, if any.
Populates a ""Query actions"" menu on the canned query and arbitrary SQL query pages.
This example adds a new query action linking to a page for explaining a query:
from datasette import hookimpl
import urllib.parse
@hookimpl
def query_actions(datasette, database, query_name, sql):
# Don't explain an explain
if sql.lower().startswith(""explain""):
return
return [
{
""href"": datasette.urls.database(database)
+ ""?""
+ urllib.parse.urlencode(
{
""sql"": ""explain "" + sql,
}
),
""label"": ""Explain this query"",
""description"": ""Get a summary of how SQLite executes the query"",
},
]
Example: datasette-create-view","[""Plugin hooks"", ""Action hooks""]","[{""href"": ""https://datasette.io/plugins/datasette-create-view"", ""label"": ""datasette-create-view""}]"
plugin_hooks:plugin-hook-row-actions,plugin_hooks,plugin-hook-row-actions,"row_actions(datasette, actor, request, database, table, row)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
actor - dictionary or None
The currently authenticated actor .
request - Request object or None
The current HTTP request.
database - string
The name of the database.
table - string
The name of the table.
row - sqlite.Row
The SQLite row object being displayed on the page.
Return links for the ""Row actions"" menu shown at the top of the row page.
This example displays the row in JSON plus some additional debug information if the user is signed in:
from datasette import hookimpl
import json
@hookimpl
def row_actions(datasette, database, table, actor, row):
if actor:
return [
{
""href"": datasette.urls.instance(),
""label"": f""Row details for {actor['id']}"",
""description"": json.dumps(
dict(row), default=repr
),
},
]
Example: datasette-enrichments","[""Plugin hooks"", ""Action hooks""]","[{""href"": ""https://datasette.io/plugins/datasette-enrichments"", ""label"": ""datasette-enrichments""}]"
plugin_hooks:plugin-hook-track-event,plugin_hooks,plugin-hook-track-event,"track_event(datasette, event)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
event - Event
Information about the event, represented as an instance of a subclass of the Event base class.
This hook will be called any time an event is tracked by code that calls the datasette.track_event(...) internal method.
The event object will always have the following properties:
name : a string representing the name of the event, for example logout or create-table .
actor : a dictionary representing the actor that triggered the event, or None if the event was not triggered by an actor.
created : a datetime.datetime object in the timezone.utc timezone representing the time the event object was created.
Other properties on the event will be available depending on the type of event. You can also access those as a dictionary using event.properties() .
The events fired by Datasette core are documented here .
This example plugin logs details of all events to standard error:
from datasette import hookimpl
import json
import sys
@hookimpl
def track_event(event):
name = event.name
actor = event.actor
properties = event.properties()
msg = json.dumps(
{
""name"": name,
""actor"": actor,
""properties"": properties,
}
)
print(msg, file=sys.stderr, flush=True)
The function can also return an async function which will be awaited. This is useful for writing to a database.
This example logs events to a datasette_events table in a database called events . It uses the startup() hook to create that table if it does not exist.
from datasette import hookimpl
import json
@hookimpl
def startup(datasette):
async def inner():
db = datasette.get_database(""events"")
await db.execute_write(
""""""
create table if not exists datasette_events (
id integer primary key,
event_type text,
created text,
actor text,
properties text
)
""""""
)
return inner
@hookimpl
def track_event(datasette, event):
async def inner():
db = datasette.get_database(""events"")
properties = event.properties()
await db.execute_write(
""""""
insert into datasette_events (event_type, created, actor, properties)
values (?, strftime('%Y-%m-%d %H:%M:%S', 'now'), ?, ?)
"""""",
(event.name, json.dumps(event.actor), json.dumps(properties)),
)
return inner
Example: datasette-events-db","[""Plugin hooks"", ""Event tracking""]","[{""href"": ""https://datasette.io/plugins/datasette-events-db"", ""label"": ""datasette-events-db""}]"
plugin_hooks:plugin-hook-database-actions,plugin_hooks,plugin-hook-database-actions,"database_actions(datasette, actor, database, request)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
actor - dictionary or None
The currently authenticated actor .
database - string
The name of the database.
request - Request object
The current HTTP request.
Populates an actions menu on the database page.
This example adds a new database action for creating a table, if the user has the edit-schema permission:
from datasette import hookimpl
@hookimpl
def database_actions(datasette, actor, database):
async def inner():
if not await datasette.permission_allowed(
actor,
""edit-schema"",
resource=database,
default=False,
):
return []
return [
{
""href"": datasette.urls.path(
""/-/edit-schema/{}/-/create"".format(
database
)
),
""label"": ""Create a table"",
}
]
return inner
Example: datasette-graphql , datasette-edit-schema","[""Plugin hooks"", ""Action hooks""]","[{""href"": ""https://datasette.io/plugins/datasette-graphql"", ""label"": ""datasette-graphql""}, {""href"": ""https://datasette.io/plugins/datasette-edit-schema"", ""label"": ""datasette-edit-schema""}]"
plugin_hooks:plugin-hook-table-actions,plugin_hooks,plugin-hook-table-actions,"table_actions(datasette, actor, database, table, request)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
actor - dictionary or None
The currently authenticated actor .
database - string
The name of the database.
table - string
The name of the table.
request - Request object or None
The current HTTP request. This can be None if the request object is not available.
Populates an actions menu on the table page.
This example adds a new table action if the signed in user is ""root"" :
from datasette import hookimpl
@hookimpl
def table_actions(datasette, actor, database, table):
if actor and actor.get(""id"") == ""root"":
return [
{
""href"": datasette.urls.path(
""/-/edit-schema/{}/{}"".format(
database, table
)
),
""label"": ""Edit schema for this table"",
""description"": ""Add, remove, rename or alter columns for this table."",
}
]
Example: datasette-graphql","[""Plugin hooks"", ""Action hooks""]","[{""href"": ""https://datasette.io/plugins/datasette-graphql"", ""label"": ""datasette-graphql""}]"
changelog:id17,changelog,id17,0.61.1 (2022-03-23),Fixed a bug where databases with a different route from their name (as used by the datasette-hashed-urls plugin ) returned errors when executing custom SQL queries. ( #1682 ),"[""Changelog""]","[{""href"": ""https://datasette.io/plugins/datasette-hashed-urls"", ""label"": ""datasette-hashed-urls plugin""}, {""href"": ""https://github.com/simonw/datasette/issues/1682"", ""label"": ""#1682""}]"
performance:performance-hashed-urls,performance,performance-hashed-urls,datasette-hashed-urls,"If you open a database file in immutable mode using the -i option, you can be assured that the content of that database will not change for the lifetime of the Datasette server.
The datasette-hashed-urls plugin implements an optimization where your database is served with part of the SHA-256 hash of the database contents baked into the URL.
A database at /fixtures will instead be served at /fixtures-aa7318b , and a year-long cache expiry header will be returned with those pages.
This will then be cached by both browsers and caching proxies such as Cloudflare or Fastly, providing a potentially significant performance boost.
To install the plugin, run the following:
datasette install datasette-hashed-urls
Prior to Datasette 0.61 hashed URL mode was a core Datasette feature, enabled using the hash_urls setting. This implementation has now been removed in favor of the datasette-hashed-urls plugin.
Prior to Datasette 0.28 hashed URL mode was the default behaviour for Datasette, since all database files were assumed to be immutable and unchanging. From 0.28 onwards the default has been to treat database files as mutable unless explicitly configured otherwise.","[""Performance and caching""]","[{""href"": ""https://datasette.io/plugins/datasette-hashed-urls"", ""label"": ""datasette-hashed-urls plugin""}]"
plugin_hooks:plugin-hook-filters-from-request,plugin_hooks,plugin-hook-filters-from-request,"filters_from_request(request, database, table, datasette)","request - Request object
The current HTTP request.
database - string
The name of the database.
table - string
The name of the table.
datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
This hook runs on the table page, and can influence the where clause of the SQL query used to populate that page, based on query string arguments on the incoming request.
The hook should return an instance of datasette.filters.FilterArguments which has one required and three optional arguments:
return FilterArguments(
where_clauses=[""id > :max_id""],
params={""max_id"": 5},
human_descriptions=[""max_id is greater than 5""],
extra_context={},
)
The arguments to the FilterArguments class constructor are as follows:
where_clauses - list of strings, required
A list of SQL fragments that will be inserted into the SQL query, joined by the and operator. These can include :named parameters which will be populated using data in params .
params - dictionary, optional
Additional keyword arguments to be used when the query is executed. These should match any :arguments in the where clauses.
human_descriptions - list of strings, optional
These strings will be included in the human-readable description at the top of the page and in the page <title>.
extra_context - dictionary, optional
Additional context variables that should be made available to the table.html template when it is rendered.
This example plugin causes 0 results to be returned if ?_nothing=1 is added to the URL:
from datasette import hookimpl
from datasette.filters import FilterArguments
@hookimpl
def filters_from_request(request):
if request.args.get(""_nothing""):
return FilterArguments(
[""1 = 0""], human_descriptions=[""NOTHING""]
)
Example: datasette-leaflet-freedraw","[""Plugin hooks""]","[{""href"": ""https://datasette.io/plugins/datasette-leaflet-freedraw"", ""label"": ""datasette-leaflet-freedraw""}]"
plugin_hooks:plugin-hook-permission-allowed,plugin_hooks,plugin-hook-permission-allowed,"permission_allowed(datasette, actor, action, resource)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
actor - dictionary
The current actor, as decided by actor_from_request(datasette, request) .
action - string
The action to be performed, e.g. ""edit-table"" .
resource - string or None
An identifier for the individual resource, e.g. the name of the table.
Called to check that an actor has permission to perform an action on a resource. Can return True if the action is allowed, False if the action is not allowed or None if the plugin does not have an opinion one way or the other.
Here's an example plugin which randomly decides whether a permission should be allowed or denied, except for view-instance which always uses the default permission scheme instead.
from datasette import hookimpl
import random
@hookimpl
def permission_allowed(action):
if action != ""view-instance"":
# Return True or False at random
return random.random() > 0.5
# Returning None falls back to default permissions
This function can alternatively return an awaitable function which itself returns True , False or None . You can use this option if you need to execute additional database queries using await datasette.execute(...) .
Here's an example that allows users to view the admin_log table only if their actor id is present in the admin_users table. It also disallows arbitrary SQL queries for the staff.db database for all users.
@hookimpl
def permission_allowed(datasette, actor, action, resource):
async def inner():
if action == ""execute-sql"" and resource == ""staff"":
return False
if action == ""view-table"" and resource == (
""staff"",
""admin_log"",
):
if not actor:
return False
user_id = actor[""id""]
            result = await datasette.get_database(
                ""staff""
            ).execute(
                ""select count(*) from admin_users where user_id = :user_id"",
                {""user_id"": user_id},
            )
            return result.first()[0] > 0
return inner
See built-in permissions for a full list of permissions that are included in Datasette core.
Example: datasette-permissions-sql","[""Plugin hooks""]","[{""href"": ""https://datasette.io/plugins/datasette-permissions-sql"", ""label"": ""datasette-permissions-sql""}]"
plugin_hooks:plugin-hook-render-cell,plugin_hooks,plugin-hook-render-cell,"render_cell(row, value, column, table, database, datasette, request)","Lets you customize the display of values within table cells in the HTML table view.
row - sqlite.Row
The SQLite row object that the value being rendered is part of
value - string, integer, float, bytes or None
The value that was loaded from the database
column - string
The name of the column being rendered
table - string or None
The name of the table - or None if this is a custom SQL query
database - string
The name of the database
datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
request - Request object
The current request object
If your hook returns None , it will be ignored. Use this to indicate that your hook is not able to custom render this particular value.
If the hook returns a string, that string will be rendered in the table cell.
If you want to return HTML markup you can do so by returning a jinja2.Markup object.
You can also return an awaitable function which returns a value.
Datasette will loop through all available render_cell hooks and display the value returned by the first one that does not return None .
Here is an example of a custom render_cell() plugin which looks for values that are a JSON string matching the following format:
{""href"": ""https://www.example.com/"", ""label"": ""Name""}
If the value matches that pattern, the plugin returns an HTML link element:
from datasette import hookimpl
import markupsafe
import json
@hookimpl
def render_cell(value):
# Render {""href"": ""..."", ""label"": ""...""} as link
if not isinstance(value, str):
return None
stripped = value.strip()
if not (
stripped.startswith(""{"") and stripped.endswith(""}"")
):
return None
try:
data = json.loads(value)
except ValueError:
return None
if not isinstance(data, dict):
return None
if set(data.keys()) != {""href"", ""label""}:
return None
href = data[""href""]
if not (
href.startswith(""/"")
or href.startswith(""http://"")
or href.startswith(""https://"")
):
return None
return markupsafe.Markup(
        '<a href=""{href}"">{label}</a>'.format(
href=markupsafe.escape(data[""href""]),
label=markupsafe.escape(data[""label""] or """")
or "" "",
)
)
Examples: datasette-render-binary , datasette-render-markdown , datasette-json-html","[""Plugin hooks""]","[{""href"": ""https://datasette.io/plugins/datasette-render-binary"", ""label"": ""datasette-render-binary""}, {""href"": ""https://datasette.io/plugins/datasette-render-markdown"", ""label"": ""datasette-render-markdown""}, {""href"": ""https://datasette.io/plugins/datasette-json-html"", ""label"": ""datasette-json-html""}]"
plugin_hooks:plugin-hook-startup,plugin_hooks,plugin-hook-startup,startup(datasette),"This hook fires when the Datasette application server first starts up.
Here is an example that validates required plugin configuration. The server will fail to start and show an error if the validation check fails:
@hookimpl
def startup(datasette):
config = datasette.plugin_config(""my-plugin"") or {}
assert (
""required-setting"" in config
), ""my-plugin requires setting required-setting""
You can also return an async function, which will be awaited on startup. Use this option if you need to execute any database queries, for example this function which creates the my_table database table if it does not yet exist:
@hookimpl
def startup(datasette):
async def inner():
db = datasette.get_database()
if ""my_table"" not in await db.table_names():
await db.execute_write(
""""""
create table my_table (mycol text)
""""""
)
return inner
Potential use-cases:
Run some initialization code for the plugin
Create database tables that a plugin needs on startup
Validate the configuration for a plugin on startup, and raise an error if it is invalid
If you are writing unit tests for a plugin that uses this hook but doesn't exercise Datasette by sending any simulated requests through it, you will need to explicitly call await ds.invoke_startup() in your tests. An example:
@pytest.mark.asyncio
async def test_my_plugin():
ds = Datasette()
await ds.invoke_startup()
# Rest of test goes here
Examples: datasette-saved-queries , datasette-init","[""Plugin hooks""]","[{""href"": ""https://datasette.io/plugins/datasette-saved-queries"", ""label"": ""datasette-saved-queries""}, {""href"": ""https://datasette.io/plugins/datasette-init"", ""label"": ""datasette-init""}]"
plugin_hooks:plugin-hook-canned-queries,plugin_hooks,plugin-hook-canned-queries,"canned_queries(datasette, database, actor)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
database - string
The name of the database.
actor - dictionary or None
The currently authenticated actor .
Use this hook to return a dictionary of additional canned query definitions for the specified database. The return value should be the same shape as the JSON described in the canned query documentation.
from datasette import hookimpl
@hookimpl
def canned_queries(datasette, database):
if database == ""mydb"":
return {
""my_query"": {
""sql"": ""select * from my_table where id > :min_id""
}
}
The hook can alternatively return an awaitable function that returns a list. Here's an example that returns queries that have been stored in the saved_queries database table, if one exists:
from datasette import hookimpl
@hookimpl
def canned_queries(datasette, database):
async def inner():
db = datasette.get_database(database)
if await db.table_exists(""saved_queries""):
results = await db.execute(
""select name, sql from saved_queries""
)
return {
result[""name""]: {""sql"": result[""sql""]}
for result in results
}
return inner
The actor parameter can be used to include the currently authenticated actor in your decision. Here's an example that returns saved queries that were saved by that actor:
from datasette import hookimpl
@hookimpl
def canned_queries(datasette, database, actor):
async def inner():
db = datasette.get_database(database)
if actor is not None and await db.table_exists(
""saved_queries""
):
results = await db.execute(
""select name, sql from saved_queries where actor_id = :id"",
{""id"": actor[""id""]},
)
return {
result[""name""]: {""sql"": result[""sql""]}
for result in results
}
return inner
Example: datasette-saved-queries","[""Plugin hooks""]","[{""href"": ""https://datasette.io/plugins/datasette-saved-queries"", ""label"": ""datasette-saved-queries""}]"
plugin_hooks:plugin-hook-menu-links,plugin_hooks,plugin-hook-menu-links,"menu_links(datasette, actor, request)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
actor - dictionary or None
The currently authenticated actor .
request - Request object or None
The current HTTP request. This can be None if the request object is not available.
This hook allows additional items to be included in the menu displayed by Datasette's top right menu icon.
The hook should return a list of {""href"": ""..."", ""label"": ""...""} menu items. These will be added to the menu.
It can alternatively return an async def awaitable function which returns a list of menu items.
This example adds a new menu item but only if the signed in user is ""root"" :
from datasette import hookimpl
@hookimpl
def menu_links(datasette, actor):
if actor and actor.get(""id"") == ""root"":
return [
{
""href"": datasette.urls.path(
""/-/edit-schema""
),
""label"": ""Edit schema"",
},
]
Using datasette.urls here ensures that links in the menu will take the base_url setting into account.
Examples: datasette-search-all , datasette-graphql","[""Plugin hooks""]","[{""href"": ""https://datasette.io/plugins/datasette-search-all"", ""label"": ""datasette-search-all""}, {""href"": ""https://datasette.io/plugins/datasette-graphql"", ""label"": ""datasette-graphql""}]"
getting_started:getting-started-tutorial,getting_started,getting-started-tutorial,Follow a tutorial,"Datasette has several tutorials to help you get started with the tool. Try one of the following:
Exploring a database with Datasette shows how to use the Datasette web interface to explore a new database.
Learn SQL with Datasette introduces SQL, and shows how to use that query language to ask questions of your data.
Cleaning data with sqlite-utils and Datasette guides you through using sqlite-utils to turn a CSV file into a database that you can explore using Datasette.","[""Getting started""]","[{""href"": ""https://datasette.io/tutorials"", ""label"": ""tutorials""}, {""href"": ""https://datasette.io/tutorials/explore"", ""label"": ""Exploring a database with Datasette""}, {""href"": ""https://datasette.io/tutorials/learn-sql"", ""label"": ""Learn SQL with Datasette""}, {""href"": ""https://datasette.io/tutorials/clean-data"", ""label"": ""Cleaning data with sqlite-utils and Datasette""}, {""href"": ""https://sqlite-utils.datasette.io/"", ""label"": ""sqlite-utils""}]"
changelog:id12,changelog,id12,Documentation,"New tutorial: Cleaning data with sqlite-utils and Datasette .
Screenshots in the documentation are now maintained using shot-scraper , as described in Automating screenshots for the Datasette documentation using shot-scraper . ( #1844 )
More detailed command descriptions on the CLI reference page. ( #1787 )
New documentation on Running Datasette using OpenRC - thanks, Adam Simpson. ( #1825 )","[""Changelog"", ""0.63 (2022-10-27)""]","[{""href"": ""https://datasette.io/tutorials/clean-data"", ""label"": ""Cleaning data with sqlite-utils and Datasette""}, {""href"": ""https://shot-scraper.datasette.io/"", ""label"": ""shot-scraper""}, {""href"": ""https://simonwillison.net/2022/Oct/14/automating-screenshots/"", ""label"": ""Automating screenshots for the Datasette documentation using shot-scraper""}, {""href"": ""https://github.com/simonw/datasette/issues/1844"", ""label"": ""#1844""}, {""href"": ""https://github.com/simonw/datasette/issues/1787"", ""label"": ""#1787""}, {""href"": ""https://github.com/simonw/datasette/pull/1825"", ""label"": ""#1825""}]"
internals:internals-tilde-encoding,internals,internals-tilde-encoding,Tilde encoding,"Datasette uses a custom encoding scheme in some places, called tilde encoding . This is primarily used for table names and row primary keys, to avoid any confusion between / characters in those values and the Datasette URLs that reference them.
Tilde encoding uses the same algorithm as URL percent-encoding , but with the ~ tilde character used in place of % .
Any character other than ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_- will be replaced by its hexadecimal character code preceded by a tilde. For example:
/ becomes ~2F
. becomes ~2E
% becomes ~25
~ becomes ~7E
Space becomes +
polls/2022.primary becomes polls~2F2022~2Eprimary
Note that the space character is a special case: it will be replaced with a + symbol.
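A quick demonstration of this scheme, using the helper functions described below:
from datasette.utils import tilde_encode, tilde_decode

encoded = tilde_encode(""polls/2022.primary"")
# encoded is now ""polls~2F2022~2Eprimary""
assert tilde_decode(encoded) == ""polls/2022.primary""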
datasette.utils.tilde_encode(s: str) -> str
Returns tilde-encoded string - for example /foo/bar -> ~2Ffoo~2Fbar
datasette.utils.tilde_decode(s: str) -> str
Decodes a tilde-encoded string, so ~2Ffoo~2Fbar -> /foo/bar","[""Internals for plugins"", ""The datasette.utils module""]","[{""href"": ""https://developer.mozilla.org/en-US/docs/Glossary/percent-encoding"", ""label"": ""URL percent-encoding""}]"
plugin_hooks:plugin-hook-extra-js-urls,plugin_hooks,plugin-hook-extra-js-urls,"extra_js_urls(template, database, table, columns, view_name, request, datasette)","This takes the same arguments as extra_template_vars(...)
This works in the same way as extra_css_urls() but for JavaScript. You can
return a list of URLs, a list of dictionaries, or an awaitable function that returns those things:
from datasette import hookimpl
@hookimpl
def extra_js_urls():
    return [
        {
            ""url"": ""https://code.jquery.com/jquery-3.3.1.slim.min.js"",
            ""sri"": ""sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo"",
        }
    ]
You can also return URLs to files from your plugin's static/ directory, if
you have one:
@hookimpl
def extra_js_urls():
    return [""/-/static-plugins/your-plugin/app.js""]
Note that your-plugin here should be the hyphenated plugin name - the name that is displayed in the list on the /-/plugins debug page.
If your code uses JavaScript modules, you should include the ""module"": True key. See Custom CSS and JavaScript for more details.
@hookimpl
def extra_js_urls():
    return [
        {
            ""url"": ""/-/static-plugins/your-plugin/app.js"",
            ""module"": True,
        }
    ]
Examples: datasette-cluster-map , datasette-vega","[""Plugin hooks"", ""Page extras""]","[{""href"": ""https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules"", ""label"": ""JavaScript modules""}, {""href"": ""https://datasette.io/plugins/datasette-cluster-map"", ""label"": ""datasette-cluster-map""}, {""href"": ""https://datasette.io/plugins/datasette-vega"", ""label"": ""datasette-vega""}]"
changelog:javascript-modules,changelog,javascript-modules,JavaScript modules,"JavaScript modules were introduced in ECMAScript 2015 and provide native browser support for the import and export keywords.
To use modules, JavaScript needs to be included in <script> tags with a type=""module"" attribute.
You can also specify an SRI (subresource integrity hash) for these assets:
extra_css_urls:
- url: https://simonwillison.net/static/css/all.bf8cd891642c.css
  sri: sha384-9qIZekWUyjCyDIf2YK1FRoKiPJq4PHt6tp/ulnuuyRBvazd0hG7pWbE99zvwSznI
extra_js_urls:
- url: https://code.jquery.com/jquery-3.2.1.slim.min.js
  sri: sha256-k2WSCIexGzOj3Euiig+TlR8gA0EmPjuc79OEeY5L45g=
This will produce:
<link rel=""stylesheet"" href=""https://simonwillison.net/static/css/all.bf8cd891642c.css"" integrity=""sha384-9qIZekWUyjCyDIf2YK1FRoKiPJq4PHt6tp/ulnuuyRBvazd0hG7pWbE99zvwSznI"" crossorigin=""anonymous"">
<script src=""https://code.jquery.com/jquery-3.2.1.slim.min.js"" integrity=""sha256-k2WSCIexGzOj3Euiig+TlR8gA0EmPjuc79OEeY5L45g="" crossorigin=""anonymous""></script>
Modern browsers will only execute the stylesheet or JavaScript if the SRI hash
matches the content served. You can generate hashes using www.srihash.org.
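Alternatively, a hash can be computed locally. A minimal Python sketch - the filename here stands in for a hypothetical local copy of the asset:
import base64
import hashlib

# An SRI hash is the base64-encoded digest of the file contents,
# prefixed with the name of the hash algorithm
with open(""all.bf8cd891642c.css"", ""rb"") as f:
    digest = hashlib.sha384(f.read()).digest()
print(""sha384-"" + base64.b64encode(digest).decode())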
Items in ""extra_js_urls"" can specify ""module"": true if they reference JavaScript that uses JavaScript modules . This configuration:
extra_js_urls:
- url: https://example.datasette.io/module.js
  module: true
Will produce this HTML:
<script type=""module"" src=""https://example.datasette.io/module.js""></script>
","[""Configuration"", null]","[{""href"": ""https://www.srihash.org/"", ""label"": ""www.srihash.org""}, {""href"": ""https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules"", ""label"": ""JavaScript modules""}]"
changelog:id82,changelog,id82,0.29.2 (2019-07-13),"Bumped Uvicorn to 0.8.4, fixing a bug where the query string was not included in the server logs. ( #559 )
Fixed bug where the navigation breadcrumbs were not displayed correctly on the page for a custom query. ( #558 )
Fixed bug where custom query names containing unicode characters caused errors.","[""Changelog""]","[{""href"": ""https://www.uvicorn.org/"", ""label"": ""Uvicorn""}, {""href"": ""https://github.com/simonw/datasette/issues/559"", ""label"": ""#559""}, {""href"": ""https://github.com/simonw/datasette/issues/558"", ""label"": ""#558""}]"