{"ok": true, "next": null, "rows": [{"id": "publish:cli-package", "page": "publish", "ref": "cli-package", "title": "datasette package", "content": "If you have Docker installed (e.g. using  Docker for Mac ), you can use the  datasette package  command to create a new Docker image in your local repository containing the datasette app bundled together with one or more SQLite databases: \n             datasette package mydatabase.db \n             Here's example output for the package command: \n             datasette package parlgov.db --extra-options=\"--setting sql_time_limit_ms 2500\"\nSending build context to Docker daemon  4.459MB\nStep 1/7 : FROM python:3.11.0-slim-bullseye\n ---> 79e1dc9af1c1\nStep 2/7 : COPY . /app\n ---> Using cache\n ---> cd4ec67de656\nStep 3/7 : WORKDIR /app\n ---> Using cache\n ---> 139699e91621\nStep 4/7 : RUN pip install datasette\n ---> Using cache\n ---> 340efa82bfd7\nStep 5/7 : RUN datasette inspect parlgov.db --inspect-file inspect-data.json\n ---> Using cache\n ---> 5fddbe990314\nStep 6/7 : EXPOSE 8001\n ---> Using cache\n ---> 8e83844b0fed\nStep 7/7 : CMD datasette serve parlgov.db --port 8001 --inspect-file inspect-data.json --setting sql_time_limit_ms 2500\n ---> Using cache\n ---> 1bd380ea8af3\nSuccessfully built 1bd380ea8af3 \n             You can now run the resulting container like so: \n             docker run -p 8081:8001 1bd380ea8af3 \n             This exposes port 8001 inside the container as port 8081 on your host machine, so you can access the application at  http://localhost:8081/ \n             You can customize the port that is exposed by the container using the  --port  option: \n             datasette package mydatabase.db --port 8080 \n             See  datasette package  for the full list of options for this command.", "breadcrumbs": "[\"Publishing data\"]", "references": "[{\"href\": \"https://www.docker.com/docker-mac\", 
\"label\": \"Docker for Mac\"}]"}, {"id": "publish:cli-publish", "page": "publish", "ref": "cli-publish", "title": "datasette publish", "content": "Once you have created a SQLite database (e.g. using  csvs-to-sqlite ), you can deploy it to a hosting account using a single command. \n             You will need a hosting account with  Heroku  or  Google Cloud . Once you have created your account, you will need to install and configure the  heroku  or  gcloud  command-line tools.", "breadcrumbs": "[\"Publishing data\"]", "references": "[{\"href\": \"https://github.com/simonw/csvs-to-sqlite/\", \"label\": \"csvs-to-sqlite\"}, {\"href\": \"https://www.heroku.com/\", \"label\": \"Heroku\"}, {\"href\": \"https://cloud.google.com/\", \"label\": \"Google Cloud\"}]"}, {"id": "publish:publish-cloud-run", "page": "publish", "ref": "publish-cloud-run", "title": "Publishing to Google Cloud Run", "content": "Google Cloud Run  allows you to publish data in a scale-to-zero environment, so your application will start running when the first request is received and will shut down again when traffic ceases. This means you only pay for time spent serving traffic. \n                 \n                     Cloud Run is a great option for inexpensively hosting small, low-traffic projects - but costs can add up for projects that serve a lot of requests. \n                     Be particularly careful if your project has tables with large numbers of rows. Search engine crawlers that index a page for every row could result in a high bill. \n                     The  datasette-block-robots  plugin can be used to ask search engine crawlers not to crawl your site, which can help avoid this issue. \n                 \n                 You will first need to install and configure the Google Cloud CLI tools by following  these instructions . 
\n                 You can then publish one or more SQLite database files to Google Cloud Run using the following command: \n                 datasette publish cloudrun mydatabase.db --service=my-database \n                 A Cloud Run  service  is a single hosted application. The service name you specify will be used as part of the Cloud Run URL. If you deploy to a service name that you have used in the past, your new deployment will replace the previous one. \n                 If you omit the  --service  option, you will be asked to pick a service name interactively during the deploy. \n                 You may need to interact with prompts from the tool. Many of the prompts ask for values that can be  set as properties for the Google Cloud SDK  if you want to avoid them. \n                 For example, the default region for the deployed instance can be set using the command: \n                 gcloud config set run/region us-central1 \n                 You should replace  us-central1  with your desired  region . Alternatively, you can specify the region by setting the  CLOUDSDK_RUN_REGION  environment variable. \n                 Once the deploy has finished, it will output a URL like this one: \n                 Service [my-service] revision [my-service-00001] has been deployed\nand is serving traffic at https://my-service-j7hipcg4aq-uc.a.run.app \n                 Cloud Run provides a URL on the  .run.app  domain, but you can also point your own domain or subdomain at your Cloud Run service - see  mapping custom domains  in the Cloud Run documentation for details. 
\n                 See  datasette publish cloudrun  for the full list of options for this command.", "breadcrumbs": "[\"Publishing data\", \"datasette publish\"]", "references": "[{\"href\": \"https://cloud.google.com/run/\", \"label\": \"Google Cloud Run\"}, {\"href\": \"https://datasette.io/plugins/datasette-block-robots\", \"label\": \"datasette-block-robots\"}, {\"href\": \"https://cloud.google.com/sdk/\", \"label\": \"these instructions\"}, {\"href\": \"https://cloud.google.com/sdk/docs/properties\", \"label\": \"set as properties for the Google Cloud SDK\"}, {\"href\": \"https://cloud.google.com/about/locations\", \"label\": \"region\"}, {\"href\": \"https://cloud.google.com/run/docs/mapping-custom-domains\", \"label\": \"mapping custom domains\"}]"}, {"id": "publish:publish-custom-metadata-and-plugins", "page": "publish", "ref": "publish-custom-metadata-and-plugins", "title": "Custom metadata and plugins", "content": "datasette publish  accepts a number of additional options which can be used to further customize your Datasette instance. \n                 You can define your own  Metadata  and deploy that with your instance like so: \n                 datasette publish cloudrun --service=my-service mydatabase.db -m metadata.json \n                 If you just want to set the title, license or source information you can do that directly using extra options to  datasette publish : \n                 datasette publish cloudrun mydatabase.db --service=my-service \\\n    --title=\"Title of my database\" \\\n    --source=\"Where the data originated\" \\\n    --source_url=\"http://www.example.com/\" \n                 You can also specify plugins you would like to install. 
For example, if you want to include the  datasette-vega  visualization plugin you can use the following: \n                 datasette publish cloudrun mydatabase.db --service=my-service --install=datasette-vega \n                 If a plugin has any  Secret configuration values  you can use the  --plugin-secret  option to set those secrets at publish time. For example, using Heroku with  datasette-auth-github  you might run the following command: \n                 datasette publish heroku my_database.db \\\n    --name my-heroku-app-demo \\\n    --install=datasette-auth-github \\\n    --plugin-secret datasette-auth-github client_id your_client_id \\\n    --plugin-secret datasette-auth-github client_secret your_client_secret", "breadcrumbs": "[\"Publishing data\", \"datasette publish\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette-vega\", \"label\": \"datasette-vega\"}, {\"href\": \"https://github.com/simonw/datasette-auth-github\", \"label\": \"datasette-auth-github\"}]"}, {"id": "publish:publish-fly", "page": "publish", "ref": "publish-fly", "title": "Publishing to Fly", "content": "Fly  is a  competitively priced  Docker-compatible hosting platform that supports running applications in globally distributed data centers close to your end users. You can deploy Datasette instances to Fly using the  datasette-publish-fly  plugin. 
\n                 pip install datasette-publish-fly\ndatasette publish fly mydatabase.db --app=\"my-app\" \n                 Consult the  datasette-publish-fly README  for more details.", "breadcrumbs": "[\"Publishing data\", \"datasette publish\"]", "references": "[{\"href\": \"https://fly.io/\", \"label\": \"Fly\"}, {\"href\": \"https://fly.io/docs/pricing/\", \"label\": \"competitively priced\"}, {\"href\": \"https://github.com/simonw/datasette-publish-fly\", \"label\": \"datasette-publish-fly\"}, {\"href\": \"https://github.com/simonw/datasette-publish-fly/blob/main/README.md\", \"label\": \"datasette-publish-fly README\"}]"}, {"id": "publish:publish-heroku", "page": "publish", "ref": "publish-heroku", "title": "Publishing to Heroku", "content": "To publish your data using  Heroku , first create an account there and install and configure the  Heroku CLI tool . \n                 You can publish one or more databases to Heroku using the following command: \n                 datasette publish heroku mydatabase.db \n                 This will output some details about the new deployment, including a URL like this one: \n                 https://limitless-reef-88278.herokuapp.com/ deployed to Heroku \n                 You can specify a custom app name by passing  -n my-app-name  to the publish command. This will also allow you to overwrite an existing app. 
\n                 Rather than deploying directly you can use the  --generate-dir  option to output the files that would be deployed to a directory: \n                 datasette publish heroku mydatabase.db --generate-dir=/tmp/deploy-this-to-heroku \n                 See  datasette publish heroku  for the full list of options for this command.", "breadcrumbs": "[\"Publishing data\", \"datasette publish\"]", "references": "[{\"href\": \"https://www.heroku.com/\", \"label\": \"Heroku\"}, {\"href\": \"https://devcenter.heroku.com/articles/heroku-cli\", \"label\": \"Heroku CLI tool\"}]"}, {"id": "publish:publish-vercel", "page": "publish", "ref": "publish-vercel", "title": "Publishing to Vercel", "content": "Vercel   - previously known as Zeit Now - provides a layer over AWS Lambda to allow for quick, scale-to-zero deployment. You can deploy Datasette instances to Vercel using the  datasette-publish-vercel  plugin. \n                 pip install datasette-publish-vercel\ndatasette publish vercel mydatabase.db --project my-database-project \n                 Not every feature is supported: consult the  datasette-publish-vercel README  for more details.", "breadcrumbs": "[\"Publishing data\", \"datasette publish\"]", "references": "[{\"href\": \"https://vercel.com/\", \"label\": \"Vercel\"}, {\"href\": \"https://github.com/simonw/datasette-publish-vercel\", \"label\": \"datasette-publish-vercel\"}, {\"href\": \"https://github.com/simonw/datasette-publish-vercel/blob/main/README.md\", \"label\": \"datasette-publish-vercel README\"}]"}], "truncated": false}