Commit 88e3ad39 authored by Daniele Venzano

Merge branch 'devel/release' into 'master'

Update documentation in preparation for release

See merge request !59
parents 685924b6 9f1a8716
@@ -2,8 +2,23 @@
## Version 2017.12
* Status page for the administrator
* New Docker Engine back-end, the Swarm back-end is now deprecated
* Nodes and ZApps can be labelled for constraining execution placement, for example to run ZApps only on nodes with GPUs
* Use non-reserved memory, cores, labels and image availability to take placement decisions
* The elastic scheduler is considered stable, the simple scheduler is now deprecated
* Expand the status page for the administrator
* More information about authentication in the log output of zoe-api
* Endpoint links in the web interface open in new windows
* Distinguish between reserved, allocated and in-use resources
* Allocate cores automatically, respecting the minimum configured in the ZApp
* Small graphic updates to the execution inspect web page
* Allow more options and resource limits to be customized from the web interface (for users and admins, not for guests); maximum limits are set in the zoe.conf file
* Additional volumes can be mounted by specifying them in the zoe.conf file
* Update unit and integration testing
* Elastic services that die are rescheduled on a new node
* Optional support for gathering usage metrics via KairosDB; for now these metrics are only used in the status page plots
* Fix UTC and timezone bugs for execution timestamps
* More configuration options for LDAP authentication
## Version 2017.09
version: '2'
services:
postgres:
image: postgres
gateway-socks:
image: zoerepo/gateway-socks
networks:
- zoe
image: postgres:9.3
zoe-api:
image: zoerepo/zoe
command: python3 zoe-api.py --debug --swarm ${SWARM_URL} --deployment-name compose --master-url tcp://zoe-master:4850 --dbuser postgres --dbhost postgres --dbname postgres
image: zoerepo/zoe-test
command: python3 zoe-api.py --debug --backend DockerEngine --backend-docker-config-file /etc/zoe/docker.conf --deployment-name compose --master-url tcp://zoe-master:4850 --dbuser postgres --dbhost postgres --dbname postgres
ports:
- "8080:5001"
depends_on:
- postgres
zoe-master:
image: zoerepo/zoe
image: zoerepo/zoe-test
ports:
- "4850:4850"
volumes:
- /etc/zoe:/etc/zoe
- /opt/zoe-workspaces:/mnt/zoe-workspaces
command: python3 zoe-master.py --debug --swarm ${SWARM_URL} --deployment-name compose --dbuser postgres --dbhost postgres --dbname postgres
command: python3 zoe-master.py --debug --backend DockerEngine --backend-docker-config-file /etc/zoe/docker.conf --deployment-name compose --dbuser postgres --dbhost postgres --dbname postgres
depends_on:
- zoe-api
networks:
@@ -5,13 +5,13 @@ Architecture
The main Zoe Components are:
* zoe master: the core component that performs application scheduling and talks to Swarm
* zoe master: the core component that performs application scheduling and talks to the container back-end
* zoe api: the Zoe frontend, offering a web interface and a REST API
* command-line clients (zoe.py and zoe-admin.py)
The Zoe master is the core component of Zoe and communicates with the clients using an internal ZeroMQ-based protocol. This protocol is designed to be robust, following the best practices from the ZeroMQ documentation. A crash of the API or of the Master process will not leave the other component inoperable, and when the failed process restarts, work will resume where it left off.
In this architecture all state is kept in a Postgres database. With Zoe we try very hard not to reinvent the wheel and the internal state system we had in the previous architecture iteration was starting to show its limits.
In this architecture all application state is kept in a Postgres database. Platform state is kept in memory and rebuilt at start time. A lot of care and tuning has gone into keeping Zoe's view of the system synchronised with the real back-end state. In a few cases containers may be left orphaned: when Zoe deems it safe they will be cleaned up automatically, otherwise a warning will be generated in the logs and the administrator has to examine the situation, as it usually points to a bug hidden somewhere in the back-end code.
Users submit *execution requests*, composed of a name and an *application description*. The frontend process (Zoe api) informs the Zoe Master that a new execution request is available for execution.
Inside the Master, a scheduler keeps track of available resources and execution requests, and applies a
@@ -55,24 +55,35 @@ API options:
* ``listen-port`` : port Zoe will use to listen for incoming connections to the web interface
* ``master-url = tcp://127.0.0.1:4850`` : address of the Zoe Master ZeroMQ API
* ``cookie-secret = changeme``: secret used to encrypt cookies
* ``zapp-shop-path = /var/lib/zoe-apps`` : path to the directory containing the ZApp Shop files
Master options:
* ``api-listen-uri = tcp://*:4850`` : ZeroMQ server connection string, used for the master listening endpoint
* ``kairosdb-enable = false`` : Enable gathering of usage metrics recorded in KairosDB
* ``kairosdb-url = http://localhost:8090`` : URL of KairosDB REST API
* ``overlay-network-name = zoe`` : name of the pre-configured Docker overlay network Zoe should use (Swarm backend)
* ``max-core-limit = 16`` : maximum number of cores a user is able to reserve
* ``max-memory-limit = 64`` : maximum amount of memory a user is able to reserve
* ``no-user-edit-limits-web = False`` : if set to true, users are NOT allowed to modify ZApp reservations via the web interface
* ``additional-volumes = <none>`` : list of additional volumes to mount in every service, for every ZApp (ex. /mnt/data:data,/mnt/data_n:data_n)
Authentication:
* ``auth-type = text`` : Authentication type (text, ldap or ldapsasl)
* ``auth-file = zoepass.csv`` : Path to the CSV file containing user,pass,role lines for text authentication (see the sketch below)
* ``ldap-server-uri = ldap://localhost`` : LDAP server to use for user authentication
* ``ldap-bind-user = ou=something,dc=any,dc=local`` : LDAP user for binding to the server
* ``ldap-bind-password = mysecretpassword`` : Password for the bind user
* ``ldap-base-dn = ou=something,dc=any,dc=local`` : LDAP base DN for users
* ``ldap-admin-gid = 5000`` : LDAP group ID for admins
* ``ldap-user-gid = 5001`` : LDAP group ID for users
* ``ldap-guest-gid = 5002`` : LDAP group ID for guests
* ``ldap-group-name = gidNumber`` : LDAP user attribute that contains the group names/IDs
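For the ``text`` authentication type, the ``auth-file`` CSV could look like the following sketch (one ``user,pass,role`` line per account; the account names and passwords are placeholders, and the role names ``admin``, ``user`` and ``guest`` are assumptions mirroring the three LDAP group levels above)::

    alice,alicesecret,admin
    bob,bobsecret,user
    visitor,visitorsecret,guest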
Scheduler options:
* ``scheduler-class = <ZoeSimpleScheduler | ZoeElasticScheduler>`` : Scheduler class to use for scheduling ZApps (default: simple scheduler)
* ``scheduler-class = <ZoeSimpleScheduler | ZoeElasticScheduler>`` : Scheduler class to use for scheduling ZApps (default: elastic scheduler)
* ``scheduler-policy = <FIFO | SIZE>`` : Scheduler policy to use for scheduling ZApps (default: FIFO)
Default options for the scheduler enable the traditional Zoe scheduler that was already available in the previous releases.
@@ -81,23 +92,26 @@ ZApp shop:
* ``zapp-shop-path = /var/lib/zoe-apps`` : Path where ZApp folders are stored
Backend choice:
Back-end choice:
* ``backend = <Swarm|Kubernetes>`` : cluster back-end to use to run ZApps
* ``backend = <DockerEngine|Swarm|Kubernetes>`` : cluster back-end to use to run ZApps, default is DockerEngine
Swarm backend options:
Swarm back-end options:
* ``backend-swarm-url = zk://zk1:2181,zk2:2181,zk3:2181`` : connection string of the Swarm API endpoint. It can be expressed as a plain HTTP URL or as a ZooKeeper node list in case Swarm is configured for HA.
* ``backend-swarm-zk-path = /docker`` : ZooKeeper path used by Docker Swarm
* ``backend-swarm-tls-cert = cert.pem`` : Docker TLS certificate file
* ``backend-swarm-tls-key = key.pem`` : Docker TLS private key file
* ``backend-swarm-tls-ca = ca.pem`` : Docker TLS CA certificate file
* ``overlay-network-name = zoe`` : name of the pre-configured Docker overlay network Zoe should use (Swarm backend)
Kubernetes backend:
Kubernetes back-end:
* ``kube-config-file = /opt/zoe/kube.conf`` : the configuration file of the Kubernetes cluster Zoe works with. Required if ``backend`` is ``Kubernetes``.
DockerEngine back-end:
* ``backend-docker-config-file = docker.conf`` : name of the DockerEngine back-end configuration file
Proxy options:
By default proxy support is disabled. To configure it refer to the :ref:`proxy documentation <proxy>`.
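Putting several of the options above together, a minimal configuration for a DockerEngine deployment could look like the following sketch. This is an illustration only: it assumes the plain ``option = value`` format used in this page and keeps most defaults, so adapt paths and limits to your installation::

    backend = DockerEngine
    backend-docker-config-file = /etc/zoe/docker.conf
    master-url = tcp://127.0.0.1:4850
    zapp-shop-path = /var/lib/zoe-apps
    scheduler-class = ZoeElasticScheduler
    scheduler-policy = FIFO
    max-core-limit = 16
    max-memory-limit = 64
    auth-type = text
    auth-file = zoepass.csv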
@@ -13,7 +13,7 @@ Development repository
----------------------
Development happens at `Eurecom's GitLab repository <https://gitlab.eurecom.fr/zoe/main>`_. The GitHub repository is a read-only mirror.
The choice of GitLab over GitHub is due to the CI pipeline that we set-up to test Zoe.
The choice of GitLab over GitHub is due to the CI pipeline that we set up to test Zoe. Please note that issue tracking happens on GitHub.
Bug reports and feature requests
--------------------------------
@@ -22,10 +22,10 @@ Bug reports and feature requests are handled through the GitHub issue system at:
The issue system should be used only for bug reports or feature requests. If you have more general questions, need help setting up Zoe, or need help running an application, please use the mailing list.
The mailing list
----------------
Mailing list
------------
The first step is to subscribe to the mailing list: `http://www.freelists.org/list/zoe <http://www.freelists.org/list/zoe>`_
The mailing list: `http://www.freelists.org/list/zoe <http://www.freelists.org/list/zoe>`_
Use the mailing list to stay up-to-date with what other developers are working on, to discuss and propose your ideas. We prefer small and incremental contributions, so it is important to keep in touch with the rest of the community to receive feedback. This way your contribution will be much more easily accepted.
.. _devel_backend:
Backend abstraction
===================
Back-end abstraction
====================
The container backend Zoe uses is configurable at runtime. Internally there is an API that Zoe, in particular the scheduler, uses to communicate with the container backend. This document explains the API, so that new backends can be created and maintained.
The container back-end Zoe uses is configurable at runtime. Internally there is an API that Zoe, in particular the scheduler, uses to communicate with the container back-end. This document explains the API, so that new back-ends can be created and maintained.
Zoe assumes backends are composed of multiple nodes. In case the backend is not clustered or does not expose per-node information, it can be implemented in Zoe as exposing a single node.
Zoe assumes back-ends are composed of multiple nodes. In case the back-end is not clustered or does not expose per-node information, it can be implemented in Zoe as exposing a single node.
Package structure
-----------------
Backends are written in Python and live in the ``zoe_master/backends/`` directory. Inside there is one Python package for each backend implementation.
Back-ends are written in Python and live in the ``zoe_master/backends/`` directory. Inside there is one Python package for each back-end implementation.
To let Zoe use a new backend, its class must be imported in ``zoe_master/backends/interface.py`` and the ``_get_backend()`` function should be modified accordingly. Then the choices in ``zoe_lib/config.py`` for the configuration file should be expanded to include the new backend name.
To let Zoe use a new back-end, its class must be imported in ``zoe_master/backends/interface.py`` and the ``_get_backend()`` function should be modified accordingly. Then the choices in ``zoe_lib/config.py`` for the configuration file should be expanded to include the new back-end class name.
More options to the configuration file can be added to support the new backend. Use the ``--<backend name>-<option name>`` convention for them.
More options can be added to the configuration file to support the new back-end. Use the ``--<backend name>-<option name>`` convention for them. If the new options do not fit the zoe.conf format, a separate configuration file can be used, as in the DockerEngine and Kubernetes cases.
API
---
Whenever Zoe needs to access the container backend it will create a new instance of the backend class. The class must be a child of ``zoe_master.backends.base.BaseBackend``.
Whenever Zoe needs to access the container back-end it will create a new instance of the back-end class. The class must be a child of ``zoe_master.backends.base.BaseBackend``. The class is not used as a singleton and may be instantiated concurrently, multiple times and in different threads.
.. autoclass:: zoe_master.backends.base.BaseBackend
:members:
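As an illustration, a new back-end skeleton could start out like the sketch below. Only the base class path is taken from the text above; the module name and the constructor signature are assumptions, and the actual methods to implement are the abstract ones documented for ``BaseBackend``::

    # zoe_master/backends/mybackend/backend.py -- illustrative sketch only
    from zoe_master.backends.base import BaseBackend


    class MyBackend(BaseBackend):
        """Skeleton for a hypothetical container back-end.

        Zoe creates a new instance of this class every time it needs to talk
        to the back-end, possibly concurrently from different threads, so any
        per-instance state must be cheap to build and thread-safe to use.
        """

        def __init__(self, opts):  # constructor signature is an assumption
            super().__init__(opts)
            self.client = None  # placeholder for the platform client object

        # Implement here the abstract methods defined by BaseBackend (service
        # creation and termination, node inspection, ...); their exact names
        # and signatures are in the class documentation above.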
@@ -39,17 +39,7 @@ Variables
To run the tests a number of variables need to be set from the GitLab interface:
* REGISTRY_PASSWORD: the password used for authenticating with the registry via docker login
* SONARQUBE_SERVER_URL: the URL of the SonarQube server
* SONARQUBE_USER: the SonarQube user
* SSH_PRIVATE_KEY: private key to be used to deploy via rsync the staging build
* STAGING_IP: IP/hostname of the staging server
* WEB_STAGING_PATH: path for the web interface on the staging server
* ZOE_STAGING_PATH: path for Zoe on the staging server
* SWARM_URL: URL of a docker engine/swarm to run integration tests
SonarQube
---------
To run SonarQube against Zoe we use a special Docker image, `available on the Docker Hub <https://hub.docker.com/r/zoerepo/sonar-scanner/>`_.
You can also build it from the Dockerfile available at ``ci/gitlab-sonar-scanner/``, relative to the repository root.
@@ -67,7 +67,6 @@ Internal module/class/method documentation
scheduler
backend
stats
jenkins-ci
gitlab-ci
integration_test
@@ -3,65 +3,36 @@
Zoe Integration Tests
=====================
* Overview
Overview
--------
- Testing the zoe rest api in action.
- The backend could be swarm or kubernetes
The objective of integration testing is to run Zoe through a simple workflow to test basic functionality in an automated manner.
* What will it do
How it works
------------
- Launch two containers for zoe-api and zoe-master, one for postgresql
- Connect to the backend (kubernetes/swarm) and test the rest API of zoe.
- The authentication type is ``text`` for simplicity.
- The test would be described in a Jenkins job
- The whole process could be described in the steps below:
- Build the container image for zoe. The tag is the $BUILD_ID from Jenkins
- Deploy zoe with the new image, based on the docker-compose-test.yml
- Start the test for all api
- Generate coverage report
- Push the built image to the private registry
- Deploy zoe with the new image, based on the docker-compose-prod.yml
The integration tests are run by GitLab CI, but they can also be run by hand. Docker is used to guarantee reproducibility and a clean environment for each test run.
The job stops whenever one of the steps above fails.
Two containers are used:
The last two steps could be optional if there is no need to deploy zoe every time.
* Standard Postgres 9.3
* Python 3.4 container with the Zoe code under test
* How to do it
Pytest will start a zoe-api and a zoe-master, then proceed to query the REST API via HTTP calls.
- Requirements:
* The DockerEngine back-end is used
* The authentication type is ``text`` for simplicity.
- A workable cluster. It could be Kubernetes or Swarm
- A private registry to push the built images.
- The runner for the integration test is contained in zoeci.py file
- Arguments explanation:
The code is under the ``integration_tests`` directory.
- argv[1]: 0: deploy, 1: build, 2: push
- args[2]: address for docker sock
- For build case:
- args[3]: private_registry_address/zoe:$BUILD_ID
- For deploy case:
- args[3]: docker-compose file location
- args[4]: private_registry_address/zoe:$BUILD_ID
What is being tested
--------------------
- An explanation of the script for the Jenkins job can be found in the Zoe continuous integration document.
The following endpoints are tested, with good and bad authentication information. Return status codes are checked for correctness.
* How to expand it?
* info
* userinfo
* execution start, list, terminate
* service list
- The initial infrastructure could be reused.
- Current tests of zoe use Python's built-in unittest library; a new library could be used if needed.
- Current tests of zoe focus on testing the behaviour of the REST API:
- info
- userinfo
- execution
- service
- with two types of authentication:
- text
- cookie
- and two scenarios:
- success
- failure
- The new tests could be added into ``tests`` folder
A simple ZApp with an nginx web server is used for testing the execution start API.
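The actual test code lives under ``integration_tests``; as a rough illustration of the kind of checks performed, a pytest-style test against the ``info`` and ``userinfo`` endpoints could look like this (the deployment URL, the credentials and the 401 status for bad credentials are assumptions)::

    import requests

    API = 'http://localhost:5001/api/0.7'  # placeholder deployment URL


    def test_info_needs_no_authentication():
        reply = requests.get(API + '/info')
        assert reply.status_code == 200


    def test_userinfo_rejects_bad_credentials():
        reply = requests.get(API + '/userinfo', auth=('admin', 'wrong-password'))
        assert reply.status_code == 401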
.. _ci-jenkins:
Zoe Continuous Integration with Jenkins
=======================================
Overview
--------
- Integrate Zoe repository to Jenkins and SonarQube
- Each commit to the Zoe repository triggers a build on Jenkins:
- Run SonarQube Scanner to analyze the codebase
- Create two containers for zoe-master, zoe-api
- Run integration test [testing rest api]
- Build new images if no errors happen
- Deploy Zoe with latest images
Software Stack
--------------
- Jenkins - version 2.7.4
- SonarQube - version 6.1
Configuration
-------------
- Jenkins: all the configuration in this section is done on the Jenkins server
- Required:
- Plugins: Github plugin, SonarQube Plugin, Quality Gates, Email Plugin (optional), Cobertura Coverage Report (optional)
- Software: Java, Python, Docker
- Go to **Manage Jenkins**, then **Global Tool Configuration** to setup Java SDK, SonarQube Scanner
- SonarQube server configuration: this aims to connect Jenkins and SonarQube together
- Go to **Manage Jenkins** then **Configure System**
- SonarQube servers: input name, server URL, server version, **server authentication token** (created on the SonarQube server)
- Quality Gates configuration:
- Go to **Manage Jenkins** then **Configure System**
- Quality Gates: input name, server URL, username and password to login into SonarQube server
- Github Servers configuration:
- Go to **Manage Jenkins** then **Configure System**
- Github: **Add Github Server**, the API URL would be ``https://api.github.com``. The credentials creation is well described in the Github plugin documentation:
- You can create your own [personal access token](https://github.com/settings/tokens/new) in your account GitHub settings.
- Token should be registered with scopes:
- admin:repo_hook - for managing hooks (read, write and delete old ones)
- repo - to see private repos
- repo:status - to manipulate commit statuses
- In Jenkins create credentials as «Secret Text», provided by Plain Credentials Plugin
- Create credentials for your Github account: this is similar to [connecting to Github over SSH](https://help.github.com/articles/connecting-to-github-with-ssh/): besides adding your public key to Github, you also need to add your private key to Jenkins.
- Create an SSH key pair on the machine running Jenkins:
- Add public key to Github
- Add private key to Jenkins credentials
- Create a new item as a **freestyle project**: this creates a Jenkins job linked to the Github repository
- General
- Select Github project
- Insert project URL
- Source Code Management
- Select **Git**
- Repositories
- Repository URL: use **SSH URL** of Github repository
- Credentials: select the one created above
- Build Triggers
- For Github plugin with version before 1.25.1: Select **Build when a change is pushed to Github**
- For Github plugin with version from 1.25.1: Select **GitHub hook trigger for GITScm polling**
- Build
- Add **Execute SonarQube Scanner** to do SonarQube Analysis
- Add **Quality Gates** to break the build when the SonarQube quality gate is not passed
- Add **Execute Shell** to run script for testing, deploying. Please refer to the Appendix section for the script.
- Post-build Actions [Optional]
- Add **Publish Cobertura Coverage Report** to collect the coverage report. With the shell script in the Appendix, the XML file generated by coverage is placed in the ``tests`` folder, so ``**/tests/coverage.xml`` should be used as the **Cobertura xml report pattern**.
- Add **E-mail Notification** to be notified when jobs finish
- Github
- Add new SSH key (the one created on Jenkins server)
- Go to the project (which is integrated to Jenkins) settings
- Integration & Services
- Add Service, choose **Jenkins (Github plugin)**
- Add Jenkins hook url
- For github plugin, this one would have the format: http://your-jenkins.com/github-webhook
- In case your Jenkins is not exposed to the world, try https://ngrok.com/
- SonarQube: all the configuration in this section is done on the SonarQube server
- On **Administration**, go to **My Account**, then **Security**
- Generate Tokens, copy this and paste to **server authentication token** on Jenkins configuration
- The project needs to provide a **sonar-properties** file in the repo (http://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner)
- Then, on **System**, then **Update Center**, install the two plugins for Python and TypeScript.
Appendix
--------
- Sonar properties files
- Take a look at sonar-project.properties files in root, ``zoe_api``, ``zoe_master``, ``zoe_lib``, ``zoe_fe`` folders.
- Execute Shell Script
- Paste this script into the execute shell step of the Jenkins job created above; the zoe_rest_api value can be changed in the ``test_config.py`` file.
::
# Run Style checker for Sphinx RST documentation
doc8 docs/
# Build new container images
python3 ci/zoeci.py 1 tcp://192.168.12.2:2375 192.168.12.2:5000/zoe:$BUILD_ID
# Deploy new zoe with the above images for testing
python3 ci/zoeci.py 0 tcp://192.168.12.2:2375 ci/docker-compose-test.yml 192.168.12.2:5000/zoe:$BUILD_ID
# Run integration test
cd tests
coverage run -p basic_auth_success_test.py
coverage run -p cookie_auth_success_test.py
coverage combine
coverage xml
cd ..
# Push the built images above to local registry
python3 ci/zoeci.py 2 tcp://192.168.12.2:2375 192.168.12.2:5000/zoe:$BUILD_ID
# Redeploy zoe with new images
python3 ci/zoeci.py 0 tcp://192.168.12.2:2375 ci/docker-compose-prod.yml 192.168.12.2:5000/zoe:$BUILD_ID
- Screenshots
- Jenkins Server configuration
- Plugin configuration
- Java SDK Configuration
.. image:: imgs/1.java.config.png
- SonarQube Scanner Configuration
.. image:: imgs/1.2.sonar.config.PNG
- SonarQube Server Configuration
.. image:: imgs/2.sonar.config.png
- Quality Gates Configuration
.. image:: imgs/2.1.sonar.quality.gates.png
- Github Server Configuration
.. image:: imgs/4.1.github.server.config.png
- Github Server Credential Creation
.. image:: imgs/4.1.github.server.credential.png
- Email Notification Configuration
.. image:: imgs/3.email.config.png
- Create Github credentials
.. image:: imgs/4.github.credential.png
- Create Freestyle project
.. image:: imgs/5.1.freestyle.project.png
.. image:: imgs/5.2.freestyle.project.png
.. image:: imgs/5.3.freestyle.project.png
.. image:: imgs/5.4.1.freestyle.project.png
.. image:: imgs/5.4.2.freestyle.project.png
.. image:: imgs/5.4.3.freestyle.project.png
.. image:: imgs/5.5.freestyle.project.png
- SonarQube Configuration
.. image:: imgs/6.sonar.token.png
- Github Repository Configuration
- Create webhook service
.. image:: imgs/7.github.repo.png
- Create access token
.. image:: imgs/7.1.github.access.token.png
@@ -28,7 +28,7 @@ This endpoint does not need authentication. It returns general, static, informat
Will return a JSON document, like this::
{
"version" : "2017.06",
"version" : "2017.12",
"deployment_name" : "prod",
"application_format_version" : 3,
"api_version" : "0.7"
@@ -215,6 +215,26 @@ Where:
* ``execution_id`` is the ID of the new execution just created.
Execution endpoints
^^^^^^^^^^^^^^^^^^^
Request (GET)::
curl -X GET -u 'username:password' http://bf5:8080/api/<api_version>/execution/endpoints/<execution_id>
Will return a JSON list like this::
[
['Jupyter Notebook interface', 'http://192.168.47.19:32920/'],
[...]
]
Where each item of the list is a tuple containing:
* The endpoint name
* The endpoint URL
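The same call can be made from a script, for example in Python (hostname, credentials and execution ID below are placeholders)::

    import requests

    reply = requests.get('http://bf5:8080/api/0.7/execution/endpoints/123',
                         auth=('username', 'password'))
    reply.raise_for_status()
    for name, url in reply.json():  # each item is a [name, URL] pair
        print(name, '->', url)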
Service endpoint
----------------
@@ -311,7 +331,8 @@ Will return a JSON document, like this::
{
"termination_threads_count" : 0,
"queue_length" : 0
"queue_length" : 0,
[...]
}
Where:
@@ -319,77 +340,7 @@ Where:
* ``termination_threads_count`` is the number of executions that are pending for termination and cleanup
* ``queue_length`` is the number of executions in the queue waiting to be started
OAuth2 endpoint
---------------
This endpoint helps users authenticate/authorize via an access token instead of a raw username/password. Authentication is required when users request a new access token. An access token can be refreshed using a refresh token.
Request new access token
^^^^^^^^^^^^^^^^^^^^^^^^
Request::
curl -u 'username:password' http://bf5:8080/api/<api_version>/oauth/token -X POST -H 'Content-Type: application/json' -d '{"grant_type": "password"}'
Will return a JSON document, like this::
{
"token_type": "Bearer",
"access_token": "3ddbe9ba-6a21-4e4d-993b-70556390c5d3",
"refresh_token": "9bab190f-e211-42aa-917e-20ce987e355e",
"expires_in": 36000
}
Where:
* ``token_type`` is the type of the token, **Bearer** is used as the default
* ``access_token`` is the token used for further authentication/authorization with the other API endpoints
* ``refresh_token`` is the token used to get a new access token when the current one has expired
* ``expires_in`` is the time (in seconds) after which the access_token expires
Refresh an access token
^^^^^^^^^^^^^^^^^^^^^^^
Request::
curl -H 'Authorization: Bearer 9bab190f-e211-42aa-917e-20ce987e355e' http://bf5:8080/api/<api_version>/oauth/token -X POST -H 'Content-Type: application/json' -d '{"grant_type": "refresh_token"}'
Will return a JSON document, like this::
{
"token_type": "Bearer",
"access_token": "378f8d5f-2eb5-4181-b632-ad23c4534d32",
"expires_in": 36000
}
Where:
* ``access_token`` is the new access token obtained after issuing a refresh
Revoke an access/refresh token
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Request::
curl -u 'username:password' -X DELETE http://bf5:8080/api/<api_version>/oauth/revoke/<token>
Where:
* ``token`` is the access token or refresh token that needs to be revoked