Monasca installation and configuration guide

In this tutorial, I will describe how to set up the Monasca components in detail. Before we start, a few things need to be clarified:

  • All the Monasca components can be installed on a single node, such as the OpenStack controller node, or they can be deployed across multiple nodes. In this tutorial, I install monasca-api in a new VM created in my OpenStack cluster, with a floating IP associated; monasca-agent is installed on the controller node. The agent node posts metrics to the API node through the floating IP, and both are in the same subnet.
  • All the user names and passwords in this tutorial are monasca and qydcos. Change them to your own.
  • The installation is performed on Ubuntu 14.04 with the OpenStack Mitaka release; for Liberty, a few special settings are required and are described later.
  • All the files used in this tutorial are here; clone the repository before you start.

    1, install the packages and tools we need.

    apt-get install -y git
    apt-get install openjdk-7-jre-headless python-pip python-dev
    

    2, install mysql database
    If you install monasca-api on the OpenStack controller node, you can skip this step and use the MySQL server already installed for the OpenStack services.

    apt-get install -y mysql-server
    

    Create the Monasca database schema. Download mon_mysql here; the schema file on GitHub has a bug and cannot create the notification tables, so I have fixed it here. Remember to change the user name and password in lines 234 and 235 of mon_mysql.sql to your own.

    mysql -uroot -ppassword < mon_mysql.sql
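
    To confirm the schema was loaded, list the tables in the new mon database. A quick sanity check, assuming the monasca/qydcos account created by the schema:

    mysql -u monasca -pqydcos mon -e "SHOW TABLES;"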
    

    3, install zookeeper
    Install ZooKeeper and restart it. I use the localhost interface and only a single ZooKeeper node, so the default configuration file needs no changes.

    apt-get install -y zookeeper zookeeperd zookeeper-bin
    service zookeeper restart
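
    To verify ZooKeeper is answering, send it the ruok four-letter command; a healthy server replies imok (this assumes netcat is installed):

    echo ruok | nc localhost 2181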
    

    4, install and configure kafka

    wget http://apache.mirrors.tds.net/kafka/0.8.1.1/kafka_2.9.2-0.8.1.1.tgz
    mv kafka_2.9.2-0.8.1.1.tgz /opt
    cd /opt
    tar zxf kafka_2.9.2-0.8.1.1.tgz
    ln -s /opt/kafka_2.9.2-0.8.1.1/ /opt/kafka
    ln -s /opt/kafka/config /etc/kafka
    

    Create the kafka system user; the Kafka service will be started as this user.

    useradd kafka -U -r
    

    Create the Kafka upstart script: copy the following contents into /etc/init/kafka.conf and save it.

    description "Kafka"
    
    start on runlevel [2345]
    stop on runlevel [!2345]
    
    respawn
    
    limit nofile 32768 32768
    
    # If zookeeper is running on this box also give it time to start up properly
    pre-start script
        if [ -e /etc/init.d/zookeeper ]; then
            /etc/init.d/zookeeper restart
        fi
    end script
    
    # Rather than using setuid/setgid sudo is used because the pre-start task must run as root
    exec sudo -Hu kafka -g kafka KAFKA_HEAP_OPTS="-Xmx1G -Xms1G" JMX_PORT=9997 /opt/kafka/bin/kafka-server-start.sh /etc/kafka/server.properties
    

    Configure Kafka: vim /etc/kafka/server.properties and make sure the following settings are present.

     host.name=localhost
     advertised.host.name=localhost
     log.dirs=/var/kafka
    

    create kafka log dirs.

    mkdir /var/kafka
    mkdir /var/log/kafka
    chown -R kafka. /var/kafka/
    chown -R kafka. /var/log/kafka/
    

    start kafka service

    service kafka start
    

    The next step is to create the Kafka topics.

    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 64 --topic metrics
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic events
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic raw-events
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic transformed-events
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic stream-definitions
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic transform-definitions
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic alarm-state-transitions
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic alarm-notifications
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic stream-notifications
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic retry-notifications
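
    You can verify the broker and topics with the tools that ship with Kafka, for example by listing the topics and describing the metrics topic:

    /opt/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181
    /opt/kafka/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic metrics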
    

    5, install and configure influxdb

    curl -sL https://repos.influxdata.com/influxdb.key | apt-key add -
    echo "deb https://repos.influxdata.com/ubuntu trusty stable" > /etc/apt/sources.list.d/influxdb.list
    apt-get update
    apt-get install -y apt-transport-https
    apt-get install -y influxdb
    
    service influxdb start
    

    Create the InfluxDB database, user, password, and retention policy; change the password to your own.

    influx
    CREATE DATABASE mon
    CREATE USER monasca WITH PASSWORD 'qydcos'
    CREATE RETENTION POLICY persister_all ON mon DURATION 90d REPLICATION 1 DEFAULT
    exit
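
    To confirm the database, user, and retention policy exist, you can run one-off queries from the shell (the -execute, -username and -password flags are available in recent influx CLI versions):

    influx -execute 'SHOW DATABASES'
    influx -username monasca -password qydcos -execute 'SHOW RETENTION POLICIES ON mon'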
    

    6, install and configure storm

    wget http://apache.mirrors.tds.net/storm/apache-storm-0.9.6/apache-storm-0.9.6.tar.gz
    mkdir /opt/storm
    cp apache-storm-0.9.6.tar.gz /opt/storm/
    cd /opt/storm/
    tar xzf apache-storm-0.9.6.tar.gz
    ln -s /opt/storm/apache-storm-0.9.6 /opt/storm/current
    
    useradd storm -U -r
    mkdir /var/storm
    mkdir /var/log/storm
    chown -R storm. /var/storm/
    chown -R storm. /var/log/storm/
    

    Modify storm.yaml as follows: vim /opt/storm/current/conf/storm.yaml

    ### base
    java.library.path: "/usr/local/lib:/opt/local/lib:/usr/lib"
    storm.local.dir: "/var/storm"
    
    ### zookeeper.*
    storm.zookeeper.servers:
        - "localhost"
    storm.zookeeper.port:  2181
    storm.zookeeper.retry.interval: 5000
    storm.zookeeper.retry.times: 29
    storm.zookeeper.root: "/storm"
    storm.zookeeper.session.timeout: 30000
    
    ### supervisor.* configs are for node supervisors
    supervisor.slots.ports:
        - 6701
        - 6702
        - 6703
        - 6704
    supervisor.childopts: "-Xmx1024m"
    
    ### worker.* configs are for task workers
    worker.childopts: "-Xmx1280m -XX:+UseConcMarkSweepGC -Dcom.sun.management.jmxremote"
    
    ### nimbus.* configs are for the master
    nimbus.host: "localhost"
    nimbus.thrift.port: 6627
    nimbus.childopts: "-Xmx1024m"
    
    ### ui.* configs are for the master
    ui.host: 127.0.0.1
    ui.port: 8078
    ui.childopts: "-Xmx768m"
    
    ### drpc.* configs
    
    ### transactional.* configs
    transactional.zookeeper.servers:
        - "localhost"
    transactional.zookeeper.port: 2181
    transactional.zookeeper.root: "/storm-transactional"
    
    ### topology.* configs are for specific executing storms
    topology.acker.executors: 1
    topology.debug: false
    
    logviewer.port: 8077
    logviewer.childopts: "-Xmx128m"
    

    Create the Storm supervisor upstart script: vim /etc/init/storm-supervisor.conf

    # Startup script for Storm Supervisor
    
    description "Storm Supervisor daemon"
    start on runlevel [2345]
    
    console log
    respawn
    
    kill timeout 240
    respawn limit 25 5
    
    setgid storm
    setuid storm
    chdir /opt/storm/current
    exec /opt/storm/current/bin/storm supervisor
    

    Create the Storm nimbus upstart script: vim /etc/init/storm-nimbus.conf

    # Startup script for Storm Nimbus
    
    description "Storm Nimbus daemon"
    start on runlevel [2345]
    
    console log
    respawn
    
    kill timeout 240
    respawn limit 25 5
    
    setgid storm
    setuid storm
    chdir /opt/storm/current
    exec /opt/storm/current/bin/storm nimbus
    

    start storm supervisor and nimbus

    service storm-supervisor start
    service storm-nimbus start
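
    Check that both daemons came up; storm list talks to nimbus and should simply report that no topologies are running yet:

    service storm-nimbus status
    service storm-supervisor status
    /opt/storm/current/bin/storm list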
    

    7, install monasca api python packages
    Some Monasca components are available in both Python and Java; I mostly choose the Python code to deploy.

    pip install monasca-common
    pip install gunicorn
    pip install greenlet  # Required for both
    pip install eventlet  # For eventlet workers
    pip install gevent    # For gevent workers
    pip install monasca-api
    pip install influxdb
    

    vim /etc/monasca/api-config.ini and modify host to your IP address.

    [DEFAULT]
    name = monasca_api
    
    [pipeline:main]
    # Add validator in the pipeline so the metrics messages can be validated.
    pipeline = auth keystonecontext api
    
    [app:api]
    paste.app_factory = monasca_api.api.server:launch
    
    [filter:auth]
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory
    
    [filter:keystonecontext]
    paste.filter_factory = monasca_api.middleware.keystone_context_filter:filter_factory
    
    [server:main]
    use = egg:gunicorn#main
    host = 192.168.2.23
    port = 8082
    workers = 1
    proc_name = monasca_api
    

    vim /etc/monasca/api-config.conf and modify the following contents.

    [DEFAULT]
    # logging, make sure that the user under whom the server runs has permission
    # to write to the directory.
    log_file = monasca-api.log
    log_dir = /var/log/monasca/api/
    debug=False
    region = RegionOne
    [security]
    # The roles that are allowed full access to the API.
    default_authorized_roles = admin, user, domainuser, domainadmin, monasca-user
    
    # The roles that are allowed to only POST metrics to the API. This role would be used by the Monasca Agent.
    agent_authorized_roles = admin
    
    # The roles that are allowed to only GET metrics from the API.
    read_only_authorized_roles = admin
    
    # The roles that are allowed to access the API on behalf of another tenant.
    # For example, a service can POST metrics to another tenant if they are a member of the "delegate" role.
    delegate_authorized_roles = admin
    
    [kafka]
    # The endpoint to the kafka server
    uri = localhost:9092
    
    [influxdb]
    # Only needed if Influxdb database is used for backend.
    # The IP address of the InfluxDB service.
    ip_address = localhost
    
    # The port number that the InfluxDB service is listening on.
    port = 8086
    
    # The username to authenticate with.
    user = monasca
    
    # The password to authenticate with.
    password = qydcos
    
    # The name of the InfluxDB database to use.
    database_name = mon
    
    [database]
    url = "mysql+pymysql://monasca:qydcos@127.0.0.1/mon"
    
    
    [keystone_authtoken]
    identity_uri = http://192.168.1.11:35357
    auth_uri = http://192.168.1.11:5000
    admin_password = qydcos
    admin_user = monasca
    admin_tenant_name = service
    cafile =
    certfile =
    keyfile =
    insecure = false
    

    Comment out the [mysql] section; leave the other settings at their defaults.
    Create the monasca system user and log directories.

    useradd monasca -U -r
    mkdir /var/log/monasca
    mkdir /var/log/monasca/api
    chown -R monasca. /var/log/monasca/
    

    On the OpenStack controller node, create the monasca user with a password and assign the admin role to user monasca in the service tenant.

    openstack user create --domain default --password qydcos monasca 
    openstack role add --project service --user monasca admin
    
    openstack service create --name monasca --description "Monasca monitoring service" monitoring
    
    # create the endpoints
    openstack endpoint create --region RegionOne monasca public http://192.168.1.143:8082/v2.0
    openstack endpoint create --region RegionOne monasca internal http://192.168.1.143:8082/v2.0
    openstack endpoint create --region RegionOne monasca admin http://192.168.1.143:8082/v2.0
    

    192.168.1.143 is the floating IP of my API VM; change it to yours.
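
    You can verify that the service and its endpoints were registered with the standard openstack CLI:

    openstack service list --long | grep monitoring
    openstack endpoint list --service monitoring
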
    Create the monasca-api upstart script: vim /etc/init/monasca-api.conf

    # Startup script for the Monasca API
    
    description "Monasca API Python app"
    start on runlevel [2345]
    
    console log
    respawn
    
    setgid monasca
    setuid monasca
    exec /usr/local/bin/gunicorn -n monasca-api -k eventlet --worker-connections=2000 --backlog=1000 --paste /etc/monasca/api-config.ini
    

    start monasca-api service

    service monasca-api start
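
    To confirm the API is answering, request the metrics resource with a valid Keystone token. A quick check, assuming your admin credentials are sourced; replace the floating IP with yours. Until the agent starts posting, the result should just be an empty list:

    TOKEN=$(openstack token issue -f value -c id)
    curl -H "X-Auth-Token: $TOKEN" http://192.168.1.143:8082/v2.0/metrics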
    

    If you get a MySQL connection error, modify the monasca-common Python file and restart the monasca-api service; the Python code has a bug reading the MySQL configuration. Here is a quick hack:
    vim /usr/local/lib/python2.7/dist-packages/monasca_common/repositories/mysql/mysql_repository.py

                self.conf = cfg.CONF
                # The original code reads these settings from the [mysql]
                # section, which fails; hard-code them instead and adjust
                # the values to your own deployment.
                #self.database_name = self.conf.mysql.database_name
                #self.database_server = self.conf.mysql.hostname
                #self.database_uid = self.conf.mysql.username
                #self.database_pwd = self.conf.mysql.password
    
                self.database_name = 'mon'
                self.database_server = 'localhost'
                self.database_uid = 'monasca'
                self.database_pwd = 'qydcos'
    
    

    8, install monasca-persister
    The monasca-persister Java code has a bug writing data into InfluxDB; I fixed it, rebuilt the jar file, and uploaded it to monasca.git. The monasca-persister Python code also has a bug writing data into InfluxDB, which I have not had time to fix.

    copy monasca-persister.jar file into /opt/monasca/
    copy persister-config.yml into /etc/monasca/

    create monasca-persister startup script
    vim /etc/init/monasca-persister.conf

    # Startup script for the Monasca Persister
    
    description "Monasca Persister daemon"
    start on runlevel [2345]
    
    console log
    respawn
    
    setgid monasca
    setuid monasca
    exec /usr/bin/java -Dfile.encoding=UTF-8 -cp /opt/monasca/monasca-persister.jar monasca.persister.PersisterApplication server /etc/monasca/persister-config.yml
    

    start monasca-persister

    service monasca-persister start
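
    Once metrics start flowing (after the agent is set up in step 11), you can confirm the persister is writing them into InfluxDB; with no metrics yet the query simply returns nothing:

    influx -username monasca -password qydcos -database mon -execute 'SHOW MEASUREMENTS'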
    

    9, install monasca-notification

    pip install --upgrade monasca-notification
    apt-get install sendmail
    

    copy notification.yaml into /etc/monasca/
    create startup script, vim /etc/init/monasca-notification.conf

    # Startup script for the monasca_notification
    
    description "Monasca Notification daemon"
    start on runlevel [2345]
    
    console log
    respawn
    
    setgid monasca
    setuid monasca
    exec /usr/bin/python /usr/local/bin/monasca-notification
    

    start notification service

    service monasca-notification start
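
    With the notification engine running you can register a notification method and list it back. An example with the monasca CLI; it assumes python-monascaclient is installed and your credentials are sourced, and the name and address are placeholders:

    monasca notification-create ops-email EMAIL ops@example.com
    monasca notification-list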
    

    10, install monasca-thresh
    copy monasca-thresh into /etc/init.d/
    copy monasca-thresh.jar into /opt/monasca-thresh/
    copy thresh-config.yml into /etc/monasca/ and modify the host and database settings to your own.
    start monasca-thresh

    service monasca-thresh start
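
    Once monasca-thresh is running, alarm definitions are actually evaluated. A simple example that alarms on low idle CPU and triggers the notification created in step 9 (replace NOTIFICATION_ID with the ID printed by notification-list):

    monasca alarm-definition-create "cpu busy" "avg(cpu.idle_perc) < 10" --alarm-actions NOTIFICATION_ID
    monasca alarm-definition-list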
    

    11, install monasca-agent
    Install monasca-agent on the OpenStack controller node so that it can monitor the OpenStack service processes.

    sudo pip install --upgrade monasca-agent
    

    Set up monasca-agent. If you are on Liberty, set the user domain ID and project domain ID to default; for Mitaka, use the ID of the default domain (as in the command below).

    monasca-setup -u monasca -p qydcos --user_domain_id e25e0413a70c41449d2ccc2578deb1e4 --project_domain_id e25e0413a70c41449d2ccc2578deb1e4 --user monasca \
     --project_name service -s monitoring --keystone_url http://192.168.1.11:35357/v3 --monasca_url http://192.168.1.143:8082/v2.0 --config_dir /etc/monasca/agent --log_dir /var/log/monasca/agent --overwrite
    

    Source admin-rc.sh and run monasca metric-list to verify that metrics are flowing.
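
    For example, a quick end-to-end check (it assumes python-monascaclient is installed and your admin credentials are sourced; the test metric name is a placeholder):

    source admin-rc.sh
    monasca metric-list --name cpu.idle_perc
    monasca metric-create test.metric 42 --dimensions hostname=testhost
    monasca metric-list --name test.metric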

  Comments

    1. Hello,
      Monasca is set up and all services are working; however, I think MySQL is writing data. Alarm definitions are not created: when I try to create an alarm definition I am unable to see any notification. I created a notification from the command line, but while creating an alarm I cannot see that notification.
      Could you please help me, what could be the problem?

    2. hi Shaun,
      I want to install Monasca without the OpenStack dependency in it.
      How can I do that? Any suggestions?

    3. [2017-05-29 09:09:48 +0000] [6593] [INFO] Starting gunicorn 19.7.1
      [2017-05-29 09:09:48 +0000] [6593] [INFO] Listening at: http://10.244.2.198:8082 (6593)
      [2017-05-29 09:09:48 +0000] [6593] [INFO] Using worker: eventlet
      [2017-05-29 09:09:48 +0000] [6598] [INFO] Booting worker with pid: 6598
      [2017-05-29 09:09:48 +0000] [6598] [ERROR] Exception in worker process
      Traceback (most recent call last):
      File "/usr/local/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 578, in spawn_worker
      worker.init_process()
      File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/geventlet.py", line 102, in init_process
      self.patch()
      File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/geventlet.py", line 91, in patch
      hubs.use_hub()
      File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/__init__.py", line 70, in use_hub
      mod = get_default_hub()
      File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/__init__.py", line 38, in get_default_hub
      import eventlet.hubs.epolls
      File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/epolls.py", line 27, in
      from eventlet.hubs.hub import BaseHub
      File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 23, in
      from eventlet.support import greenlets as greenlet, clear_sys_exc_info, monotonic, six
      File "/usr/local/lib/python2.7/dist-packages/eventlet/support/monotonic.py", line 167, in
      raise RuntimeError('no suitable implementation for this system')
      RuntimeError: no suitable implementation for this system
      [2017-05-29 09:09:48 +0000] [6598] [INFO] Worker exiting (pid: 6598)
      [2017-05-29 09:09:48 +0000] [6593] [INFO] Shutting down: Master
      [2017-05-29 09:09:48 +0000] [6593] [INFO] Reason: Worker failed to boot.

      I got the above error. How can I solve this?

    4. How did you install the persister and threshold components of Monasca? You haven't mentioned those steps in the article, and at the beginning InfluxDB is installed using apt-get, but later on you mention installing influxdb again using pip, i.e.
      "pip install influxdb". What is the difference between these two installations of influxdb?

      • Step 8 covers installing the persister and step 10 the threshold engine. All the related files are here, https://github.com/shaunos/monasca.git; I already pointed to it at the beginning of this article.

        The InfluxDB packages installed by apt-get are the official server packages. The package installed by pip is the InfluxDB Python client library, which provides a Python interface for other components to talk to InfluxDB.

    5. [2017-10-03 06:59:53 +0000] [15942] [DEBUG] Current configuration:
      proxy_protocol: False
      worker_connections: 1000
      statsd_host: None
      max_requests_jitter: 0
      post_fork:
      errorlog: -
      enable_stdio_inheritance: False
      worker_class: sync
      ssl_version: 2
      suppress_ragged_eofs: True
      syslog: False
      syslog_facility: user
      when_ready:
      pre_fork:
      cert_reqs: 0
      preload_app: False
      keepalive: 2
      accesslog: None
      group: 0
      graceful_timeout: 30
      do_handshake_on_connect: False
      spew: False
      workers: 9
      proc_name: monasca-api
      sendfile: None
      pidfile: None
      umask: 0
      on_reload:
      pre_exec:
      worker_tmp_dir: None
      limit_request_fields: 100
      pythonpath: None
      on_exit:
      config: None
      logconfig: None
      check_config: False
      statsd_prefix:
      secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
      reload_engine: auto
      proxy_allow_ips: ['127.0.0.1']
      pre_request:
      post_request:
      forwarded_allow_ips: ['127.0.0.1']
      worker_int:
      raw_paste_global_conf: []
      threads: 1
      max_requests: 0
      chdir: /root
      daemon: False
      user: 0
      limit_request_line: 4094
      access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
      certfile: None
      on_starting:
      post_worker_init:
      child_exit:
      worker_exit:
      paste: /etc/monasca/api-config.ini
      default_proc_name: /etc/monasca/api-config.ini
      syslog_addr: udp://localhost:514
      syslog_prefix: None
      ciphers: TLSv1
      worker_abort:
      loglevel: DEBUG
      bind: ['127.0.0.1:8082']
      raw_env: []
      initgroups: False
      capture_output: False
      reload: False
      limit_request_field_size: 8190
      nworkers_changed:
      timeout: 30
      keyfile: None
      ca_certs: None
      tmp_upload_dir: None
      backlog: 2048
      logger_class: gunicorn.glogging.Logger
      [2017-10-03 06:59:53 +0000] [15942] [INFO] Starting gunicorn 19.7.1
      [2017-10-03 06:59:53 +0000] [15942] [ERROR] Connection in use: ('127.0.0.1', 8082)
      [2017-10-03 06:59:53 +0000] [15942] [DEBUG] connection to ('127.0.0.1', 8082) failed: [Errno 98] Address already in use
      [2017-10-03 06:59:53 +0000] [15942] [ERROR] Retrying in 1 second.
      [2017-10-03 06:59:54 +0000] [15942] [ERROR] Connection in use: ('127.0.0.1', 8082)
      [2017-10-03 06:59:54 +0000] [15942] [DEBUG] connection to ('127.0.0.1', 8082) failed: [Errno 98] Address already in use
      [2017-10-03 06:59:54 +0000] [15942] [ERROR] Retrying in 1 second.
      [2017-10-03 06:59:55 +0000] [15942] [ERROR] Connection in use: ('127.0.0.1', 8082)
      [2017-10-03 06:59:55 +0000] [15942] [DEBUG] connection to ('127.0.0.1', 8082) failed: [Errno 98] Address already in use
      [2017-10-03 06:59:55 +0000] [15942] [ERROR] Retrying in 1 second.
      [2017-10-03 06:59:56 +0000] [15942] [ERROR] Connection in use: ('127.0.0.1', 8082)
      [2017-10-03 06:59:56 +0000] [15942] [DEBUG] connection to ('127.0.0.1', 8082) failed: [Errno 98] Address already in use
      [2017-10-03 06:59:56 +0000] [15942] [ERROR] Retrying in 1 second.
      [2017-10-03 06:59:57 +0000] [15942] [ERROR] Connection in use: ('127.0.0.1', 8082)
      [2017-10-03 06:59:57 +0000] [15942] [DEBUG] connection to ('127.0.0.1', 8082) failed: [Errno 98] Address already in use
      [2017-10-03 06:59:57 +0000] [15942] [ERROR] Retrying in 1 second.
      [2017-10-03 06:59:58 +0000] [15942] [ERROR] Can't connect to ('127.0.0.1', 8082)

      Hi Shaun,
      I've got this error; can you please help me resolve it?
      Thanks,

    6. Hello sir, thank you for the installation guide.
      I have a problem when I try to start monasca-thresh.
      The error is:

      Error: A JNI error has occurred, please check your installation and try again
      Exception in thread "main" java.lang.NoClassDefFoundError: ch/qos/logback/core/Context
      at java.lang.Class.getDeclaredMethods0(Native Method)
      at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
      at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
      at java.lang.Class.getMethod0(Class.java:3018)
      at java.lang.Class.getMethod(Class.java:1784)
      at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544)
      at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526)
      Caused by: java.lang.ClassNotFoundException: ch.qos.logback.core.Context
      at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
      at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
      at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
      at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
      … 7 more
      I don't know how to solve this, could you help me please?

      • @Mundo,

        I too had the same issue. I removed monasca_common and monasca_thresh completely from the system: for monasca_common I did pip uninstall monasca_common, and for monasca_thresh I removed the dir I had set up. I then removed the git clones for both and started all over again. I ran git clone -b stable/ (for me it was stable/queens) https:///monasca_common and then monasca_thresh. I switched into monasca_common/java and ran mvn clean install. Then cd .. to go back one and ran python setup.py install. Then I switched over to the monasca_thresh/thresh dir and ran mvn package. Once that is complete there will be a directory monasca_thresh/thresh/target that includes a monasca-thresh-x.x.x-SNAPSHOT-shaded.jar (where x.x.x is the version number). That .jar file will have the logback.xml in the MANIFEST-MF path. Copy that to your monasca_thresh dir (I used /usr/lib/monasca_thresh) as monasca_thresh.jar and the service should start.

        Hope this helps you, as well as others out.
