Commit 1666c10d authored by Daniele Venzano

Remove GELF logging

parent f16c6073
@@ -4,7 +4,7 @@ services:
    image: 192.168.12.2:5000/zoe:reducetime
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
-    command: python3 zoe-api.py --debug --backend-swarm-url consul://192.168.12.2 --deployment-name prod --dbuser postgres --dbhost 192.168.12.2 --dbport 5432 --dbname postgres --dbpass postgres --overlay-network-name my-net --master-url tcp://zoe-master:4850 --auth-type ldap --ldap-server-uri ldap://172.17.0.6 --ldap-base-dn ou=users,dc=example,dc=com --proxy-type apache --proxy-container apache2 --proxy-config-file /etc/apache2/sites-available/all.conf --proxy-path fsdna.on.kpmg.de/zoe --gelf-address udp://192.168.12.2:5004
+    command: python3 zoe-api.py --debug --backend-swarm-url consul://192.168.12.2 --deployment-name prod --dbuser postgres --dbhost 192.168.12.2 --dbport 5432 --dbname postgres --dbpass postgres --overlay-network-name my-net --master-url tcp://zoe-master:4850 --auth-type ldap --ldap-server-uri ldap://172.17.0.6 --ldap-base-dn ou=users,dc=example,dc=com --proxy-type apache --proxy-container apache2 --proxy-config-file /etc/apache2/sites-available/all.conf --proxy-path fsdna.on.kpmg.de/zoe
    ports:
      - "5001:5001"
    logging:
@@ -16,7 +16,7 @@ services:
    image: 192.168.12.2:5000/zoe:reducetime
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
-    command: python3 zoe-master.py --debug --backend-swarm-url consul://192.168.12.2 --deployment-name prod --dbuser postgres --dbhost 192.168.12.2 --dbport 5432 --dbname postgres --dbpass postgres --overlay-network-name my-net --auth-type ldap --ldap-server-uri ldap://ldapker.example.com --ldap-base-dn ou=users,dc=example,dc=com --proxy-type apache --proxy-container apache2 --proxy-config-file /etc/apache2/sites-available/all.conf --proxy-path fsdna.on.kpmg.de/zoe --gelf-address udp://192.168.12.2:5004
+    command: python3 zoe-master.py --debug --backend-swarm-url consul://192.168.12.2 --deployment-name prod --dbuser postgres --dbhost 192.168.12.2 --dbport 5432 --dbname postgres --dbpass postgres --overlay-network-name my-net --auth-type ldap --ldap-server-uri ldap://ldapker.example.com --ldap-base-dn ou=users,dc=example,dc=com --proxy-type apache --proxy-container apache2 --proxy-config-file /etc/apache2/sites-available/all.conf --proxy-path fsdna.on.kpmg.de/zoe
    ports:
      - "4850:4850"
    depends_on:
......
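With the ``--gelf-address`` flag removed from the Zoe commands above, per-container log shipping can still be configured at the Compose level instead. A hypothetical sketch (the service name and the ``udp://192.168.12.2:5004`` endpoint are taken from the surrounding file, not mandated by Zoe):

```yaml
# Hypothetical docker-compose.yml fragment: ship container output in
# GELF format via the Docker log driver, without involving Zoe itself.
services:
  zoe-api:
    logging:
      driver: gelf
      options:
        gelf-address: "udp://192.168.12.2:5004"
```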
@@ -22,7 +22,6 @@ Common options:
* ``influxdb-dbname = zoe`` : Name of the InfluxDB database to use for storing metrics
* ``influxdb-url = http://localhost:8086`` : URL of the InfluxDB service (ex. http://localhost:8086)
* ``influxdb-enable = False`` : Enable metric output toward InfluxDB
-* ``gelf-address = udp://1.2.3.4:1234`` : Enable Docker GELF log output to this destination
* ``workspace-base-path = /mnt/zoe-workspaces`` : Base directory where user workspaces will be created. This directory should reside on a shared filesystem visible by all Docker hosts.
* ``overlay-network-name = zoe`` : Name of the pre-configured Docker overlay network Zoe should use
* ``backend = Swarm`` : Name of the backend to enable and use
......
@@ -3,20 +3,9 @@
Container logs
==============
-By default Zoe does not involve itself with the output from container processes. The logs can be retrieved with the usual Docker command ``docker logs`` while a container is alive; they are lost forever when the container is deleted. This solution, however, does not scale very well: to examine logs, users need access to the Docker command-line tools and to the Swarm they are running in.
+By design Zoe does not involve itself with the output from container processes. The logs can be retrieved with the usual Docker command ``docker logs`` while a container is alive; they are lost forever when the container is deleted. This solution, however, does not scale very well: to examine logs, users need access to the Docker command-line tools and to the Swarm they are running in.
-To setup a more convenient logging solution, Zoe provides the ``gelf-address`` option. With it, Zoe can configure Docker to send the container outputs to an external destination in GELF format. GELF is the richest format supported by Docker and can be ingested by a number of tools such as Graylog and Logstash. When that option is set all containers created by Zoe will send their output (standard output and standard error) to the destination specified. Docker is instructed to add all Zoe-defined tags to the GELF messages, so that they can be aggregated by Zoe execution, Zoe user, etc. A popular logging stack that supports GELF is `ELK <https://www.elastic.co/products>`_.
+In production we recommend configuring your backend to manage logs according to your policies. Docker Engines, for example, can be configured to send standard output and error to a remote destination in GELF format (other formats are supported) as soon as they are generated.
+In our experience, web interfaces like Kibana or Graylog are not useful to Zoe users: they want to quickly dig through the logs of their executions to find an error or an interesting number to correlate with some other number in some other log. The web interfaces are slow and cluttered compared to using grep on a text file.
-A popular logging stack that supports GELF is `ELK <https://www.elastic.co/products>`_. However, in our experience, web interfaces like Kibana or Graylog are not useful to the Zoe users: they want to quickly dig through logs of their executions to find an error or an interesting number to correlate to some other number in some other log. The web interfaces are slow and cluttered compared to using grep on a text file.
Which alternative is good for you depends on the usage pattern of your users, your log auditing requirements, etc.
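For the engine-level setup recommended above, the GELF driver can be made the Docker default in ``/etc/docker/daemon.json``. A sketch under stated assumptions: the collector address is a placeholder, and the ``labels`` values are hypothetical label keys, not names defined by Zoe:

```json
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://graylog.example.com:12201",
    "labels": "zoe.execution,zoe.user"
  }
}
```

With this in place every container on the host ships its output, so no per-application flag is needed.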
-What if you want your logs to go through Kafka
-----------------------------------------------
-Zoe also provides a Zoe Logger process, in case you prefer to use Kafka in your log pipeline. Each container output will be sent to its own topic, that Kafka will retain for seven days by default. With Kafka you can also monitor the container output in real-time, for example to debug your container images running in Zoe. In this case GELF is converted to syslog-like format for easier handling.
-The logger process is very small and simple, you can modify it to suit your needs and convert logs in any format to any destination you prefer. It lives in its own repository, here: https://github.com/DistributedSystemsGroup/zoe-logger
-If you are interested in sending container output to Kafka, please make your voice heard at `this Docker issue <https://github.com/docker/docker/issues/21271>`_ for a more production-friendly Docker-Kafka integration.
-Please note that the ``zoe-logger`` is more or less a toy and can be used as a starting point to develop a more robust and scalable solution. Also, it is currently unmaintained.
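The GELF-to-syslog-like conversion the removed docs attribute to zoe-logger can be sketched in a few lines. This is a minimal illustration, not the zoe-logger implementation: it handles only plain or zlib-compressed single-datagram payloads and ignores GELF chunking:

```python
import json
import zlib
from datetime import datetime, timezone

def gelf_to_syslog_line(payload: bytes) -> str:
    """Turn one GELF UDP payload into a syslog-like text line.

    A minimal sketch: real GELF payloads may also be gzip-compressed
    or chunked across datagrams, which this deliberately ignores.
    """
    if payload[:1] == b'\x78':  # zlib header byte; plain JSON starts with '{'
        payload = zlib.decompress(payload)
    msg = json.loads(payload.decode('utf-8'))
    ts = datetime.fromtimestamp(msg.get('timestamp', 0), tz=timezone.utc)
    return '{} {} {}'.format(ts.strftime('%b %d %H:%M:%S'),
                             msg.get('host', '-'),
                             msg.get('short_message', ''))
```

Feeding such lines to a Kafka producer, one topic per container, would reproduce the pipeline the removed section describes.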
@@ -62,7 +62,6 @@ def load_configuration(test_conf=None):
    argparser.add_argument('--influxdb-dbname', help='Name of the InfluxDB database to use for storing metrics', default='zoe')
    argparser.add_argument('--influxdb-url', help='URL of the InfluxDB service (ex. http://localhost:8086)', default='http://localhost:8086')
    argparser.add_argument('--influxdb-enable', action="store_true", help='Enable metric output toward influxDB')
-    argparser.add_argument('--gelf-address', help='Enable Docker GELF log output to this destination (ex. udp://1.2.3.4:1234)', default='')
    argparser.add_argument('--workspace-base-path', help='Path where user workspaces will be created by Zoe. Must be visible at this path on all Swarm hosts.', default='/mnt/zoe-workspaces')
    argparser.add_argument('--workspace-deployment-path', help='Path appended to the workspace path to distinguish this deployment. If unspecified is equal to the deployment name.', default='--default--')
    argparser.add_argument('--overlay-network-name', help='Name of the Swarm overlay network Zoe should use', default='zoe')
......
@@ -164,20 +164,6 @@ class SwarmClient:
        if port.expose:
            port_bindings[str(port.number) + '/tcp'] = None
-        if get_conf().gelf_address != '':
-            log_config = {
-                "type": "gelf",
-                "config": {
-                    'gelf-address': get_conf().gelf_address,
-                    'labels': ",".join(service_instance.labels)
-                }
-            }
-        else:
-            log_config = {
-                "type": "json-file",
-                "config": {}
-            }
        environment = {}
        for name, value in service_instance.environment:
            environment[name] = value
@@ -195,7 +181,6 @@ class SwarmClient:
            environment=environment,
            hostname=service_instance.hostname,
            labels=service_instance.labels,
-            log_config=log_config,
            mem_limit=service_instance.memory_limit,
            memswap_limit=service_instance.memory_limit,
            name=service_instance.name,
......
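For reference, the SwarmClient branch deleted above can be summarized as a small pure function. A sketch: ``build_log_config`` is a hypothetical name standing in for the inline code, with the ``get_conf().gelf_address`` lookup and service labels passed in as arguments:

```python
def build_log_config(gelf_address: str, labels: list) -> dict:
    """Choose the Docker log driver the way the removed code did:
    ship to a GELF endpoint when one is configured, otherwise keep
    the default local json-file logs."""
    if gelf_address != '':
        return {
            'type': 'gelf',
            'config': {
                'gelf-address': gelf_address,
                'labels': ','.join(labels),
            },
        }
    return {'type': 'json-file', 'config': {}}
```

The resulting dict was passed as the ``log_config`` keyword argument when creating the container's host config.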