Monasca installation and configuration guide

In this tutorial, I will describe how to set up the Monasca components in detail. Before we start, a few things need to be clarified:

  • All the Monasca components can be installed on a single node, such as the OpenStack controller node, or they can be deployed across multiple nodes. In this tutorial, I install monasca-api in a new VM created in my OpenStack cluster, which has a floating IP associated. Monasca-agent is installed on the controller node. The agent node posts metrics to the API node through the floating IP; they are in the same subnet.
  • All the user names and passwords in this tutorial are monasca and qydcos; change them to yours.
  • The installation is performed on Ubuntu 14.04 with the OpenStack Mitaka release; for Liberty, a few special settings are needed and are described later.
  • All the files used in this tutorial are here; clone the repository before you start.

    1, install the packages and tools we need.

    apt-get install -y git
    apt-get install openjdk-7-jre-headless python-pip python-dev
    

    2, install mysql database
    If you install monasca-api on the OpenStack controller node, you can skip this step and use the MySQL already installed for the OpenStack services.

    apt-get install -y mysql-server
    

    Create the Monasca database schema. Download mon_mysql here; the schema file on GitHub has a bug and cannot create the notification table, so I have fixed it here. Remember to change the user name and password on lines 234 and 235 of mon_mysql.sql to yours.

    mysql -uroot -ppassword < mon_mysql.sql
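
    A quick sanity check (optional): list the tables in the mon database with the monasca/qydcos credentials from this tutorial to confirm the schema and the grants were created.

    mysql -u monasca -pqydcos mon -e "SHOW TABLES;"
    # you should see tables such as alarm, alarm_definition and notification_method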
    

    3, install zookeeper
    Install ZooKeeper and restart it. I use the localhost interface and only one ZooKeeper instance, so the default configuration file needs no changes.

    apt-get install -y zookeeper zookeeperd zookeeper-bin
    service zookeeper restart
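
    To confirm ZooKeeper is answering on its default port, send it the ruok four-letter command (assuming netcat is installed); it should reply imok.

    echo ruok | nc localhost 2181
    # expected reply: imok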
    

    4, install and configure kafka

    wget http://apache.mirrors.tds.net/kafka/0.8.1.1/kafka_2.9.2-0.8.1.1.tgz
    mv kafka_2.9.2-0.8.1.1.tgz /opt
    cd /opt
    tar zxf kafka_2.9.2-0.8.1.1.tgz
    ln -s /opt/kafka_2.9.2-0.8.1.1/ /opt/kafka
    ln -s /opt/kafka/config /etc/kafka
    

    Create the kafka system user; the Kafka service will be started as this user.

    useradd kafka -U -r
    

    Create the Kafka startup script /etc/init/kafka.conf: copy the following contents into it and save.

    description "Kafka"
    
    start on runlevel [2345]
    stop on runlevel [!2345]
    
    respawn
    
    limit nofile 32768 32768
    
    # If zookeeper is running on this box also give it time to start up properly
    pre-start script
        if [ -e /etc/init.d/zookeeper ]; then
            /etc/init.d/zookeeper restart
        fi
    end script
    
    # Rather than using setuid/setgid sudo is used because the pre-start task must run as root
    exec sudo -Hu kafka -g kafka KAFKA_HEAP_OPTS="-Xmx1G -Xms1G" JMX_PORT=9997 /opt/kafka/bin/kafka-server-start.sh /etc/kafka/server.properties
    

    Configure Kafka: vim /etc/kafka/server.properties and make sure the following settings are configured.

     host.name=localhost
     advertised.host.name=localhost
     log.dirs=/var/kafka
    

    Create the Kafka data and log directories.

    mkdir /var/kafka
    mkdir /var/log/kafka
    chown -R kafka. /var/kafka/
    chown -R kafka. /var/log/kafka/
    

    Start the Kafka service.

    service kafka start
    

    The next step is to create the Kafka topics.

    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 64 --topic metrics
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic events
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic raw-events
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic transformed-events
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic stream-definitions
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic transform-definitions
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic alarm-state-transitions
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic alarm-notifications
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic stream-notifications
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic retry-notifications
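
    Once the topics are created, list them to make sure Kafka and ZooKeeper are wired together correctly, and optionally describe one topic to check its partition count.

    /opt/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181
    /opt/kafka/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic metrics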
    

    5, install and configure influxdb

    curl -sL https://repos.influxdata.com/influxdb.key | apt-key add -
    echo "deb https://repos.influxdata.com/ubuntu trusty stable" > /etc/apt/sources.list.d/influxdb.list
    apt-get update
    apt-get install -y apt-transport-https
    apt-get install -y influxdb
    
    service influxdb start
    

    Create the InfluxDB database, user, password, and retention policy; change the password to yours.

    influx
    CREATE DATABASE mon
    CREATE USER monasca WITH PASSWORD 'qydcos'
    CREATE RETENTION POLICY persister_all ON mon DURATION 90d REPLICATION 1 DEFAULT
    exit
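
    To double-check the database and retention policy without the interactive shell, you can run the same queries with the influx CLI's -execute flag (the monasca/qydcos credentials below are the ones created above).

    influx -username monasca -password qydcos -execute 'SHOW DATABASES'
    influx -username monasca -password qydcos -execute 'SHOW RETENTION POLICIES ON mon'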
    

    6, install and configure storm

    wget http://apache.mirrors.tds.net/storm/apache-storm-0.9.6/apache-storm-0.9.6.tar.gz
    mkdir /opt/storm
    cp apache-storm-0.9.6.tar.gz /opt/storm/
    cd /opt/storm/
    tar xzf apache-storm-0.9.6.tar.gz
    ln -s /opt/storm/apache-storm-0.9.6 /opt/storm/current
    
    useradd storm -U -r
    mkdir /var/storm
    mkdir /var/log/storm
    chown -R storm. /var/storm/
    chown -R storm. /var/log/storm/
    

    Modify storm.yaml as follows: vim /opt/storm/current/conf/storm.yaml

    ### base
    java.library.path: "/usr/local/lib:/opt/local/lib:/usr/lib"
    storm.local.dir: "/var/storm"
    
    ### zookeeper.*
    storm.zookeeper.servers:
        - "localhost"
    storm.zookeeper.port:  2181
    storm.zookeeper.retry.interval: 5000
    storm.zookeeper.retry.times: 29
    storm.zookeeper.root: "/storm"
    storm.zookeeper.session.timeout: 30000
    
    ### supervisor.* configs are for node supervisors
    supervisor.slots.ports:
        - 6701
        - 6702
        - 6703
        - 6704
    supervisor.childopts: "-Xmx1024m"
    
    ### worker.* configs are for task workers
    worker.childopts: "-Xmx1280m -XX:+UseConcMarkSweepGC -Dcom.sun.management.jmxremote"
    
    ### nimbus.* configs are for the master
    nimbus.host: "localhost"
    nimbus.thrift.port: 6627
    nimbus.childopts: "-Xmx1024m"
    
    ### ui.* configs are for the master
    ui.host: 127.0.0.1
    ui.port: 8078
    ui.childopts: "-Xmx768m"
    
    ### drpc.* configs
    
    ### transactional.* configs
    transactional.zookeeper.servers:
        - "localhost"
    transactional.zookeeper.port: 2181
    transactional.zookeeper.root: "/storm-transactional"
    
    ### topology.* configs are for specific executing storms
    topology.acker.executors: 1
    topology.debug: false
    
    logviewer.port: 8077
    logviewer.childopts: "-Xmx128m"
    

    Create the Storm supervisor startup script: vim /etc/init/storm-supervisor.conf

    # Startup script for Storm Supervisor
    
    description "Storm Supervisor daemon"
    start on runlevel [2345]
    
    console log
    respawn
    
    kill timeout 240
    respawn limit 25 5
    
    setgid storm
    setuid storm
    chdir /opt/storm/current
    exec /opt/storm/current/bin/storm supervisor
    

    Create the Storm nimbus startup script: vim /etc/init/storm-nimbus.conf

    # Startup script for Storm Nimbus
    
    description "Storm Nimbus daemon"
    start on runlevel [2345]
    
    console log
    respawn
    
    kill timeout 240
    respawn limit 25 5
    
    setgid storm
    setuid storm
    chdir /opt/storm/current
    exec /opt/storm/current/bin/storm nimbus
    

    Start the Storm supervisor and nimbus.

    service storm-supervisor start
    service storm-nimbus start
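
    Give nimbus a few seconds to start, then verify both upstart jobs are running and that nimbus answers; storm list should connect and, at this point, report no topologies.

    initctl list | grep storm
    /opt/storm/current/bin/storm list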
    

    7, install monasca api python packages
    Some Monasca components are available in both Python and Java; I mainly choose the Python code to deploy.

    pip install monasca-common
    pip install gunicorn
    pip install greenlet  # Required for both
    pip install eventlet  # For eventlet workers
    pip install gevent    # For gevent workers
    pip install monasca-api
    pip install influxdb
    

    vim /etc/monasca/api-config.ini and modify host to your IP address:

    [DEFAULT]
    name = monasca_api
    
    [pipeline:main]
    # Add validator in the pipeline so the metrics messages can be validated.
    pipeline = auth keystonecontext api
    
    [app:api]
    paste.app_factory = monasca_api.api.server:launch
    
    [filter:auth]
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory
    
    [filter:keystonecontext]
    paste.filter_factory = monasca_api.middleware.keystone_context_filter:filter_factory
    
    [server:main]
    use = egg:gunicorn#main
    host = 192.168.2.23
    port = 8082
    workers = 1
    proc_name = monasca_api
    

    vim /etc/monasca/api-config.conf and modify the following contents:

    [DEFAULT]
    # logging, make sure that the user under whom the server runs has permission
    # to write to the directory.
    log_file = monasca-api.log
    log_dir = /var/log/monasca/api/
    debug=False
    region = RegionOne
    [security]
    # The roles that are allowed full access to the API.
    default_authorized_roles = admin, user, domainuser, domainadmin, monasca-user
    
    # The roles that are allowed to only POST metrics to the API. This role would be used by the Monasca Agent.
    agent_authorized_roles = admin
    
    # The roles that are allowed to only GET metrics from the API.
    read_only_authorized_roles = admin
    
    # The roles that are allowed to access the API on behalf of another tenant.
    # For example, a service can POST metrics to another tenant if they are a member of the "delegate" role.
    delegate_authorized_roles = admin
    
    [kafka]
    # The endpoint to the kafka server
    uri = localhost:9092
    
    [influxdb]
    # Only needed if Influxdb database is used for backend.
    # The IP address of the InfluxDB service.
    ip_address = localhost
    
    # The port number that the InfluxDB service is listening on.
    port = 8086
    
    # The username to authenticate with.
    user = monasca
    
    # The password to authenticate with.
    password = qydcos
    
    # The name of the InfluxDB database to use.
    database_name = mon
    
    [database]
    url = "mysql+pymysql://monasca:qydcos@127.0.0.1/mon"
    
    
    [keystone_authtoken]
    identity_uri = http://192.168.1.11:35357
    auth_uri = http://192.168.1.11:5000
    admin_password = qydcos
    admin_user = monasca
    admin_tenant_name = service
    cafile =
    certfile =
    keyfile =
    insecure = false
    

    Comment out the [mysql] section and keep the other settings at their defaults.
    Create the monasca system user and log directories.

    useradd monasca -U -r
    mkdir /var/log/monasca
    mkdir /var/log/monasca/api
    chown -R monasca. /var/log/monasca/
    

    On the OpenStack controller node, create the monasca user with a password, and assign the admin role to the monasca user in the service tenant.

    openstack user create --domain default --password qydcos monasca 
    openstack role add --project service --user monasca admin
    
    openstack service create --name monasca --description "Monasca monitoring service" monitoring
    
    # create the endpoints
    openstack endpoint create --region RegionOne monasca public http://192.168.1.143:8082/v2.0
    openstack endpoint create --region RegionOne monasca internal http://192.168.1.143:8082/v2.0
    openstack endpoint create --region RegionOne monasca admin http://192.168.1.143:8082/v2.0
    

    192.168.1.143 is the floating IP of my API VM; change it to yours.
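
    Before moving on, you can confirm the service and endpoints were registered (run on the controller node with admin credentials sourced).

    openstack service list | grep monitoring
    openstack endpoint list --service monasca
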
    Create the monasca-api startup script: vim /etc/init/monasca-api.conf

    # Startup script for the Monasca API
    
    description "Monasca API Python app"
    start on runlevel [2345]
    
    console log
    respawn
    
    setgid monasca
    setuid monasca
    exec /usr/local/bin/gunicorn -n monasca-api -k eventlet --worker-connections=2000 --backlog=1000 --paste /etc/monasca/api-config.ini
    

    Start the monasca-api service.

    service monasca-api start
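
    A quick end-to-end test of the API (a sketch, run from a node with the OpenStack CLI and admin credentials sourced; replace 192.168.1.143 with your API address): request a Keystone token and query the metrics endpoint. An empty elements list is fine at this stage, it just proves the API and the Keystone middleware work.

    TOKEN=$(openstack token issue -f value -c id)
    curl -s -H "X-Auth-Token: $TOKEN" http://192.168.1.143:8082/v2.0/metrics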
    

    If you get a MySQL connection error, modify the monasca-common Python file below and restart the monasca-api service; the Python code has a bug reading the MySQL configuration. Here is a quick hack:
    vim /usr/local/lib/python2.7/dist-packages/monasca_common/repositories/mysql/mysql_repository.py

                self.conf = cfg.CONF
                #self.database_name = self.conf.mysql.database_name
                #self.database_server = self.conf.mysql.hostname
                #self.database_uid = self.conf.mysql.username
                #self.database_pwd = self.conf.mysql.password
    
                self.database_name = 'mon'
                self.database_server = 'localhost'
                self.database_uid = 'monasca'
                self.database_pwd = 'qydcos'
    
    

    8, install monasca-persister
    The monasca-persister Java code has a bug writing data into InfluxDB; I fixed it, rebuilt the jar file, and uploaded it to monasca.git. The monasca-persister Python code also has a bug writing data into InfluxDB, which I have not had time to fix.

    Copy the monasca-persister.jar file into /opt/monasca/.
    Copy persister-config.yml into /etc/monasca/.

    Create the monasca-persister startup script:
    vim /etc/init/monasca-persister.conf

    # Startup script for the Monasca Persister
    
    description "Monasca Persister Python app"
    start on runlevel [2345]
    
    console log
    respawn
    
    setgid monasca
    setuid monasca
    exec /usr/bin/java -Dfile.encoding=UTF-8 -cp /opt/monasca/monasca-persister.jar monasca.persister.PersisterApplication server /etc/monasca/persister-config.yml
    

    Start monasca-persister.

    service monasca-persister start
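
    The Java persister is a Dropwizard application, so it exposes a healthcheck on the admin port set in persister-config.yml (8091 in my configuration; adjust if yours differs).

    curl http://localhost:8091/healthcheck
    # each check in the returned JSON should report "healthy": true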
    

    9, install monasca-notification

    pip install --upgrade monasca-notification
    apt-get install sendmail
    

    Copy notification.yaml into /etc/monasca/.
    Create the startup script: vim /etc/init/monasca-notification.conf

    # Startup script for the monasca_notification
    
    description "Monasca Notification daemon"
    start on runlevel [2345]
    
    console log
    respawn
    
    setgid monasca
    setuid monasca
    exec /usr/bin/python /usr/local/bin/monasca-notification
    

    Start the notification service.

    service monasca-notification start
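
    monasca-notification has no HTTP endpoint of its own, so simply confirm that the upstart job stays up and the process is running.

    status monasca-notification
    ps aux | grep [m]onasca-notification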
    

    10, install monasca-thresh
    Copy monasca-thresh into /etc/init.d/.
    Copy monasca-thresh.jar into /opt/monasca-thresh/.
    Copy thresh-config.yml into /etc/monasca/ and modify the host and database settings to yours.
    Start monasca-thresh.

    service monasca-thresh start
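
    monasca-thresh submits a topology to the local Storm cluster, so after a minute or so storm list should show it (the topology is typically named thresh-cluster) with status ACTIVE.

    /opt/storm/current/bin/storm list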
    

    11, install monasca-agent
    Install monasca-agent on the OpenStack controller node so that it can monitor the OpenStack service processes.

    sudo pip install --upgrade monasca-agent
    

    Set up monasca-agent. If you are on Liberty, change the user domain ID and project domain ID to default; for Mitaka, use the default domain ID:

    monasca-setup -u monasca -p qydcos --user_domain_id e25e0413a70c41449d2ccc2578deb1e4 --project_domain_id e25e0413a70c41449d2ccc2578deb1e4 --user monasca \
     --project_name service -s monitoring --keystone_url http://192.168.1.11:35357/v3 --monasca_url http://192.168.1.143:8082/v2.0 --config_dir /etc/monasca/agent --log_dir /var/log/monasca/agent --overwrite
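
    monasca-setup writes the agent configuration under /etc/monasca/agent and registers the monasca-agent service. To verify the agent is collecting and forwarding metrics, check the service status and the logs under /var/log/monasca/agent (log file names may vary slightly between agent versions).

    service monasca-agent status
    tail -n 20 /var/log/monasca/agent/collector.log /var/log/monasca/agent/forwarder.log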
    

    Source admin-rc.sh, then run monasca metric-list to verify that metrics are coming in.
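
    Once metrics start flowing, you can exercise the whole pipeline from the CLI, for example by filtering the metric list by name and creating a simple alarm definition (the expression below is just an example threshold, not something you must use).

    monasca metric-list --name cpu.idle_perc
    monasca alarm-definition-create "high cpu on controller" "avg(cpu.idle_perc) < 10"
    monasca alarm-list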

  • 128 Comments

    1. Here is my admin-openrc.sh:
      export OS_USERNAME='monasca'
      export OS_PASSWORD='monasca'
      export OS_TENANT_NAME='service'
      export OS_ENDPOINT_TYPE='internalURL'
      export OS_IDENTITY_API_VERSION='3'
      export OS_USER_DOMAIN_NAME='Default'
      export OS_PROJECT_DOMAIN_NAME='Default'
      export OS_AUTH_VERSION='3'
      export OS_AUTH_STRATEGY=keystone
      export OS_REGION_NAME='RegionOne'
      export OS_AUTH_URL=http://10.48.221.231:5000/v3/
      export OS_USER_DOMAIN_NAME='Default'
      export OS_PROJECT_NAME=service

    2. Hi Shaun,

      I see one error in persister log

      ERROR [2017-03-21 05:25:36,703] monasca.persister.repository.influxdb.InfluxV9RepoWriter: [metric-3]: failed to send data to influxdb mon at http://localhost:8086/write?db=mon: 400
      ERROR [2017-03-21 05:36:15,797] monasca.persister.repository.influxdb.InfluxV9RepoWriter: [metric-2]: failed to send data to influxdb mon at http://localhost:8086/write?db=mon: 400
      ERROR [2017-03-21 05:37:36,701] monasca.persister.repository.influxdb.InfluxV9RepoWriter: [metric-3]: failed to send data to influxdb mon at http://localhost:8086/write?db=mon: 400
      ERROR [2017-03-21 05:46:42,697] monasca.persister.repository.influxdb.InfluxV9RepoWriter: [metric-3]: failed to send data to influxdb mon at http://localhost:8086/write?db=mon: 400
      ERROR [2017-03-21 05:49:38,715] monasca.persister.repository.influxdb.InfluxV9RepoWriter: [metric-1]: failed to send data to influxdb mon at http://localhost:8086/write?db=mon: 400
      ERROR [2017-03-21 05:54:11,694] monasca.persister.repository.influxdb.InfluxV9RepoWriter: [metric-2]: failed to send data to influxdb mon at http://localhost:8086/write?db=mon: 400
      ERROR [2017-03-21 06:12:07,727] monasca.persister.repository.influxdb.InfluxV9RepoWriter: [metric-2]: failed to send data to influxdb mon at http://localhost:8086/write?db=mon: 400

      • Are you using the monasca-persister Python code?
        The Python code has a bug writing data into InfluxDB; use the Java code instead. I have fixed it in the Java code. Recheck step 8.

        • I copied your persister-config.yml and updated it with my settings.
          I found one more file in your GitHub, persister.conf. What is this file used for? Do I need to copy it somewhere?

    3. INFO [2017-03-21 12:29:21,845] org.eclipse.jetty.setuid.SetUIDListener: Opened application@5fab4353{HTTP/1.1}{0.0.0.0:8090}
      INFO [2017-03-21 12:29:21,845] org.eclipse.jetty.setuid.SetUIDListener: Opened admin@64d12f36{HTTP/1.1}{0.0.0.0:8091}
      INFO [2017-03-21 12:29:21,847] org.eclipse.jetty.server.Server: jetty-9.0.z-SNAPSHOT
      INFO [2017-03-21 12:29:21,858] monasca.persister.consumer.KafkaConsumer: [metric-0]: start
      INFO [2017-03-21 12:29:21,859] monasca.persister.consumer.KafkaConsumer: [metric-1]: start
      INFO [2017-03-21 12:29:21,859] monasca.persister.consumer.KafkaConsumerRunnableBasic: [metric-0]: run
      INFO [2017-03-21 12:29:21,859] monasca.persister.consumer.KafkaConsumer: [metric-2]: start
      INFO [2017-03-21 12:29:21,859] monasca.persister.consumer.KafkaConsumerRunnableBasic: [metric-1]: run
      INFO [2017-03-21 12:29:21,859] monasca.persister.consumer.KafkaConsumer: [metric-3]: start
      INFO [2017-03-21 12:29:21,860] monasca.persister.consumer.KafkaConsumer: [alarm-state-transition-0]: start
      INFO [2017-03-21 12:29:21,861] monasca.persister.consumer.KafkaConsumerRunnableBasic: [metric-2]: run
      INFO [2017-03-21 12:29:21,862] monasca.persister.consumer.KafkaConsumerRunnableBasic: [metric-3]: run
      INFO [2017-03-21 12:29:21,862] monasca.persister.consumer.KafkaConsumer: [alarm-state-transition-1]: start
      INFO [2017-03-21 12:29:21,862] monasca.persister.consumer.KafkaConsumerRunnableBasic: [alarm-state-transition-0]: run
      INFO [2017-03-21 12:29:21,862] monasca.persister.consumer.KafkaConsumerRunnableBasic: [alarm-state-transition-1]: run
      INFO [2017-03-21 12:29:21,948] com.sun.jersey.server.impl.application.WebApplicationImpl: Initiating Jersey application, version 'Jersey: 1.18.1 02/19/2014 03:28 AM'
      INFO [2017-03-21 12:29:22,017] io.dropwizard.jersey.DropwizardResourceConfig: The following paths were found for the configured resources:

      GET /resource (monasca.persister.resource.Resource)

      INFO [2017-03-21 12:29:22,271] org.eclipse.jetty.server.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@4b45effd{/,null,AVAILABLE}
      INFO [2017-03-21 12:29:22,272] io.dropwizard.setup.AdminEnvironment: tasks =

      POST /tasks/gc (io.dropwizard.servlets.tasks.GarbageCollectionTask)

      INFO [2017-03-21 12:29:22,277] org.eclipse.jetty.server.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@5ae82d0d{/,null,AVAILABLE}
      INFO [2017-03-21 12:29:22,294] org.eclipse.jetty.server.ServerConnector: Started application@5fab4353{HTTP/1.1}{0.0.0.0:8090}
      INFO [2017-03-21 12:29:22,295] org.eclipse.jetty.server.ServerConnector: Started admin@64d12f36{HTTP/1.1}{0.0.0.0:8091}
      ERROR [2017-03-21 12:30:39,721] monasca.persister.repository.influxdb.InfluxV9RepoWriter: [metric-2]: failed to send data to influxdb mon at http://localhost:8086/write?db=mon: 400
      ERROR [2017-03-21 12:30:39,725] monasca.persister.repository.influxdb.InfluxV9RepoWriter: [metric-2]: http response: {"error":"unable to parse '{\"database\":\"mon\",\"retentionPolicy\":\"\",\"points\":[{\"measurement\":\"mem.total_mb\",\"tags\":{\"_tenant_id\":\"6062fb6b11a648cf8e9616fcdedc479f\",\"_region\":\"RegionOne\",\"service\":\"monitoring\",\"hostname\":\"lvpalkvm207\"},\"time\":\"2017-03-21T12:30:31.274Z\",\"fields\":{\"value_meta\":\"{}\",\"value\":3951.0},\"precision\":\"ms\"}, ... (rest of the metric batch omitted) ... ],\"tags\":{}}': missing tag value"}

      INFO [2017-03-21 12:30:41,920] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: consumer.id = 1_metric-0
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: socket.timeout.ms = 30000
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: zookeeper.session.timeout.ms = 60000
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: refresh.leader.backoff.ms = 200
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: fetch.wait.max.ms = 100
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: consumer.timeout.ms = 1000
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: fetch.message.max.bytes = 1048576
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: group.id = 1_metrics
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: auto.offset.reset = largest
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: zookeeper.sync.time.ms = 2000
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: socket.receive.buffer.bytes = 65536
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: fetch.min.bytes = 1
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: client.id = 1_metric-0
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: rebalance.max.retries = 4
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: queued.max.message.chunks = 10
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: auto.commit.enable = false
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: rebalance.backoff.ms = 2000
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: zookeeper.connection.timeout.ms = 60000
      INFO [2017-03-21 12:30:41,923] monasca.persister.consumer.KafkaChannel: [metric-0]: Kafka configuration: zookeeper.connect = localhost:2181
      INFO [2017-03-21 12:30:42,171] org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
      INFO [2017-03-21 12:30:42,171] org.apache.zookeeper.ZooKeeper: Client environment:host.name=lvpalkvm208.pal.sap.corp
      INFO [2017-03-21 12:30:42,171] org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.7.0_121
      INFO [2017-03-21 12:30:42,171] org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
      INFO [2017-03-21 12:30:42,171] org.apache.zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
      INFO [2017-03-21 12:30:42,171] org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/opt/monasca/monasca-persister.jar
      INFO [2017-03-21 12:30:42,171] org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
      INFO [2017-03-21 12:30:42,171] org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
      INFO [2017-03-21 12:30:42,171] org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=
      INFO [2017-03-21 12:30:42,171] org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
      INFO [2017-03-21 12:30:42,171] org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
      INFO [2017-03-21 12:30:42,171] org.apache.zookeeper.ZooKeeper: Client environment:os.version=4.2.0-42-generic
      INFO [2017-03-21 12:30:42,171] org.apache.zookeeper.ZooKeeper: Client environment:user.name=monasca
      INFO [2017-03-21 12:30:42,171] org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/monasca
      INFO [2017-03-21 12:30:42,171] org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/
      INFO [2017-03-21 12:30:42,172] org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.I0Itec.zkclient.ZkClient@4c08da6f
      INFO [2017-03-21 12:30:42,192] org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
      INFO [2017-03-21 12:30:42,200] org.apache.zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
      INFO [2017-03-21 12:30:42,208] org.apache.zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15aeaedb1b767a0, negotiated timeout = 40000
      INFO [2017-03-21 12:30:42,544] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: consumer.id = 1_metric-1
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: socket.timeout.ms = 30000
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: zookeeper.session.timeout.ms = 60000
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: refresh.leader.backoff.ms = 200
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: fetch.wait.max.ms = 100
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: consumer.timeout.ms = 1000
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: fetch.message.max.bytes = 1048576
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: group.id = 1_metrics
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: auto.offset.reset = largest
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: zookeeper.sync.time.ms = 2000
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: socket.receive.buffer.bytes = 65536
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: fetch.min.bytes = 1
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: client.id = 1_metric-1
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: rebalance.max.retries = 4
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: queued.max.message.chunks = 10
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: auto.commit.enable = false
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: rebalance.backoff.ms = 2000
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: zookeeper.connection.timeout.ms = 60000
      INFO [2017-03-21 12:30:42,545] monasca.persister.consumer.KafkaChannel: [metric-1]: Kafka configuration: zookeeper.connect = localhost:2181
      INFO [2017-03-21 12:30:42,549] org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.I0Itec.zkclient.ZkClient@6ada79a9
      INFO [2017-03-21 12:30:42,551] org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
      INFO [2017-03-21 12:30:42,551] org.apache.zookeeper.ClientCnxn: Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
      INFO [2017-03-21 12:30:42,554] org.apache.zookeeper.ClientCnxn: Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x15aeaedb1b767a1, negotiated timeout = 40000
      INFO [2017-03-21 12:30:42,560] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: consumer.id = 1_metric-2
      INFO [2017-03-21 12:30:42,560] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: socket.timeout.ms = 30000
      INFO [2017-03-21 12:30:42,560] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: zookeeper.session.timeout.ms = 60000
      INFO [2017-03-21 12:30:42,560] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: refresh.leader.backoff.ms = 200
      INFO [2017-03-21 12:30:42,560] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: fetch.wait.max.ms = 100
      INFO [2017-03-21 12:30:42,560] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: consumer.timeout.ms = 1000
      INFO [2017-03-21 12:30:42,560] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: fetch.message.max.bytes = 1048576
      INFO [2017-03-21 12:30:42,560] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: group.id = 1_metrics
      INFO [2017-03-21 12:30:42,560] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: auto.offset.reset = largest
      INFO [2017-03-21 12:30:42,560] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: zookeeper.sync.time.ms = 2000
      INFO [2017-03-21 12:30:42,560] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: socket.receive.buffer.bytes = 65536
      INFO [2017-03-21 12:30:42,560] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: fetch.min.bytes = 1
      INFO [2017-03-21 12:30:42,560] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: client.id = 1_metric-2
      INFO [2017-03-21 12:30:42,561] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: rebalance.max.retries = 4
      INFO [2017-03-21 12:30:42,561] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: queued.max.message.chunks = 10
      INFO [2017-03-21 12:30:42,561] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: auto.commit.enable = false
      INFO [2017-03-21 12:30:42,561] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: rebalance.backoff.ms = 2000
      INFO [2017-03-21 12:30:42,561] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: zookeeper.connection.timeout.ms = 60000
      INFO [2017-03-21 12:30:42,561] monasca.persister.consumer.KafkaChannel: [metric-2]: Kafka configuration: zookeeper.connect = localhost:2181
      INFO [2017-03-21 12:30:42,567] org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.I0Itec.zkclient.ZkClient@33264613
      INFO [2017-03-21 12:30:42,568] org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
      INFO [2017-03-21 12:30:42,568] org.apache.zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
      INFO [2017-03-21 12:30:42,572] org.apache.zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15aeaedb1b767a2, negotiated timeout = 40000
      INFO [2017-03-21 12:30:42,576] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: consumer.id = 1_metric-3
      INFO [2017-03-21 12:30:42,576] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: socket.timeout.ms = 30000
      INFO [2017-03-21 12:30:42,576] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: zookeeper.session.timeout.ms = 60000
      INFO [2017-03-21 12:30:42,576] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: refresh.leader.backoff.ms = 200
      INFO [2017-03-21 12:30:42,576] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: fetch.wait.max.ms = 100
      INFO [2017-03-21 12:30:42,577] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: consumer.timeout.ms = 1000
      INFO [2017-03-21 12:30:42,577] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: fetch.message.max.bytes = 1048576
      INFO [2017-03-21 12:30:42,577] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: group.id = 1_metrics
      INFO [2017-03-21 12:30:42,577] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: auto.offset.reset = largest
      INFO [2017-03-21 12:30:42,577] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: zookeeper.sync.time.ms = 2000
      INFO [2017-03-21 12:30:42,577] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: socket.receive.buffer.bytes = 65536
      INFO [2017-03-21 12:30:42,577] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: fetch.min.bytes = 1
      INFO [2017-03-21 12:30:42,578] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: client.id = 1_metric-3
      INFO [2017-03-21 12:30:42,578] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: rebalance.max.retries = 4
      INFO [2017-03-21 12:30:42,578] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: queued.max.message.chunks = 10
      INFO [2017-03-21 12:30:42,578] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: auto.commit.enable = false
      INFO [2017-03-21 12:30:42,578] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: rebalance.backoff.ms = 2000
      INFO [2017-03-21 12:30:42,578] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: zookeeper.connection.timeout.ms = 60000
      INFO [2017-03-21 12:30:42,578] monasca.persister.consumer.KafkaChannel: [metric-3]: Kafka configuration: zookeeper.connect = localhost:2181
      INFO [2017-03-21 12:30:42,582] org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.I0Itec.zkclient.ZkClient@35f0ade4
      INFO [2017-03-21 12:30:42,583] org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
      INFO [2017-03-21 12:30:42,584] org.apache.zookeeper.ClientCnxn: Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
      INFO [2017-03-21 12:30:42,586] org.apache.zookeeper.ClientCnxn: Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x15aeaedb1b767a3, negotiated timeout = 40000
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: consumer.id = 1_alarm-state-transition-0
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: socket.timeout.ms = 30000
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: zookeeper.session.timeout.ms = 60000
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: refresh.leader.backoff.ms = 200
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: fetch.wait.max.ms = 100
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: consumer.timeout.ms = 1000
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: fetch.message.max.bytes = 1048576
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: group.id = 1_alarm-state-transitions
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: auto.offset.reset = largest
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: zookeeper.sync.time.ms = 2000
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: socket.receive.buffer.bytes = 65536
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: fetch.min.bytes = 1
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: client.id = 1_alarm-state-transition-0
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: rebalance.max.retries = 4
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: queued.max.message.chunks = 10
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: auto.commit.enable = false
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: rebalance.backoff.ms = 2000
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: zookeeper.connection.timeout.ms = 60000
      INFO [2017-03-21 12:30:42,591] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-0]: Kafka configuration: zookeeper.connect = localhost:2181
      INFO [2017-03-21 12:30:42,596] org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.I0Itec.zkclient.ZkClient@4384d01f
      INFO [2017-03-21 12:30:42,597] org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
      INFO [2017-03-21 12:30:42,597] org.apache.zookeeper.ClientCnxn: Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
      INFO [2017-03-21 12:30:42,600] org.apache.zookeeper.ClientCnxn: Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x15aeaedb1b767a4, negotiated timeout = 40000
      INFO [2017-03-21 12:30:42,603] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: consumer.id = 1_alarm-state-transition-1
      INFO [2017-03-21 12:30:42,603] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: socket.timeout.ms = 30000
      INFO [2017-03-21 12:30:42,603] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: zookeeper.session.timeout.ms = 60000
      INFO [2017-03-21 12:30:42,603] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: refresh.leader.backoff.ms = 200
      INFO [2017-03-21 12:30:42,603] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: fetch.wait.max.ms = 100
      INFO [2017-03-21 12:30:42,604] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: consumer.timeout.ms = 1000
      INFO [2017-03-21 12:30:42,604] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: fetch.message.max.bytes = 1048576
      INFO [2017-03-21 12:30:42,604] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: group.id = 1_alarm-state-transitions
      INFO [2017-03-21 12:30:42,604] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: auto.offset.reset = largest
      INFO [2017-03-21 12:30:42,604] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: zookeeper.sync.time.ms = 2000
      INFO [2017-03-21 12:30:42,604] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: socket.receive.buffer.bytes = 65536
      INFO [2017-03-21 12:30:42,604] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: fetch.min.bytes = 1
      INFO [2017-03-21 12:30:42,604] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: client.id = 1_alarm-state-transition-1
      INFO [2017-03-21 12:30:42,604] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: rebalance.max.retries = 4
      INFO [2017-03-21 12:30:42,604] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: queued.max.message.chunks = 10
      INFO [2017-03-21 12:30:42,604] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: auto.commit.enable = false
      INFO [2017-03-21 12:30:42,604] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: rebalance.backoff.ms = 2000
      INFO [2017-03-21 12:30:42,604] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: zookeeper.connection.timeout.ms = 60000
      INFO [2017-03-21 12:30:42,604] monasca.persister.consumer.KafkaChannel: [alarm-state-transition-1]: Kafka configuration: zookeeper.connect = localhost:2181
      INFO [2017-03-21 12:30:42,608] org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.I0Itec.zkclient.ZkClient@6ca21395
      INFO [2017-03-21 12:30:42,609] org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
      INFO [2017-03-21 12:30:42,610] org.apache.zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
      INFO [2017-03-21 12:30:42,612] org.apache.zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15aeaedb1b767a5, negotiated timeout = 40000
      INFO [2017-03-21 12:30:42,623] io.dropwizard.server.ServerFactory: Starting monasca-persister

      • Failed to send data to influxdb mon at http://localhost:8086/write?db=mon: 400
        ERROR [2017-03-21 12:30:39,725] monasca.persister.repository.influxdb.InfluxV9RepoWriter: [metric-2]: http response: {"error":"unable to parse '{\"database\"

        Check InfluxDB: is it listening on port 8086 and bound to the localhost interface? Also verify the
        username, password, and database name. If all of those are correct, try upgrading InfluxDB to a newer version.
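
        A quick way to verify all three from the API node (a minimal sketch; it assumes the default mon database and uses a placeholder user/password — substitute the values from your persister configuration):

            # is anything listening on port 8086?
            netstat -tlnp | grep 8086

            # a healthy InfluxDB answers /ping with HTTP 204
            curl -i http://localhost:8086/ping

            # test a write with the persister's credentials (monasca:qydcos is only an example)
            curl -i -XPOST 'http://localhost:8086/write?db=mon' -u monasca:qydcos --data-binary 'test_metric,hostname=test value=1'

        If the manual write succeeds but the persister still gets 400 responses, the persister and InfluxDB are most likely speaking different write-API versions, which is why upgrading InfluxDB can help.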

    4. Hello, Shaun,
      please help me.

      I modified a line
      in the file "/usr/local/lib/python2.7/dist-packages/monasca_api/api/server.py"

      to

      dimension_names = simport.load('monasca_api.v2.reference.metrics:DimensionValues')()

      and then I got this error:

      /usr/bin/python /usr/local/bin/gunicorn -n monasca-api -k eventlet --worker-connections=2000 --backlog=1000 --paste /etc/monasca/api-config.ini
      [2017-04-13 18:56:53 +0000] [15900] [INFO] Starting gunicorn 19.7.1
      [2017-04-13 18:56:53 +0000] [15900] [INFO] Listening at: http://127.0.0.1:8000 (15900)
      [2017-04-13 18:56:53 +0000] [15900] [INFO] Using worker: eventlet
      [2017-04-13 18:56:53 +0000] [15905] [INFO] Booting worker with pid: 15905
      [2017-04-13 18:56:53 +0000] [15905] [ERROR] Exception in worker process
      Traceback (most recent call last):
      File "/usr/local/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 578, in spawn_worker
      worker.init_process()
      File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/geventlet.py", line 102, in init_process
      self.patch()
      File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/geventlet.py", line 91, in patch
      hubs.use_hub()
      File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/__init__.py", line 70, in use_hub
      mod = get_default_hub()
      File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/__init__.py", line 38, in get_default_hub
      import eventlet.hubs.epolls
      File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/epolls.py", line 27, in <module>
      from eventlet.hubs.hub import BaseHub
      File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 23, in <module>
      from eventlet.support import greenlets as greenlet, clear_sys_exc_info, monotonic, six
      File "/usr/local/lib/python2.7/dist-packages/eventlet/support/monotonic.py", line 167, in <module>
      raise RuntimeError('no suitable implementation for this system')
      RuntimeError: no suitable implementation for this system
      [2017-04-13 18:56:53 +0000] [15905] [INFO] Worker exiting (pid: 15905)
      [2017-04-13 18:56:53 +0000] [15900] [INFO] Shutting down: Master
      [2017-04-13 18:56:53 +0000] [15900] [INFO] Reason: Worker failed to boot.

        • Solved the errors above.
          A new error occurred.

          root@24ceimo:/etc/monasca# /usr/local/bin/gunicorn -n monasca-api -k eventlet --worker-connections=2000 --backlog=1000 --paste /etc/monasca/api-config.ini
          [2017-04-14 14:04:28 +0000] [147939] [INFO] Starting gunicorn 19.7.1
          [2017-04-14 14:04:28 +0000] [147939] [INFO] Listening at: http://180.210.14.240:8082 (147939)
          [2017-04-14 14:04:28 +0000] [147939] [INFO] Using worker: eventlet
          [2017-04-14 14:04:28 +0000] [147944] [INFO] Booting worker with pid: 147944
          [2017-04-14 14:04:29 +0000] [147944] [ERROR] Exception in worker process
          Traceback (most recent call last):
          File "/usr/local/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 578, in spawn_worker
          worker.init_process()
          File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/geventlet.py", line 103, in init_process
          super(EventletWorker, self).init_process()
          File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 126, in init_process
          self.load_wsgi()
          File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 135, in load_wsgi
          self.wsgi = self.app.wsgi()
          File "/usr/local/lib/python2.7/dist-packages/gunicorn/app/base.py", line 67, in wsgi
          self.callable = self.load()
          File "/usr/local/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 63, in load
          return self.load_pasteapp()
          File "/usr/local/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 59, in load_pasteapp
          return load_pasteapp(self.cfgurl, self.relpath, global_conf=self.cfg.paste_global_conf)
          File "/usr/local/lib/python2.7/dist-packages/gunicorn/app/pasterapp.py", line 69, in load_pasteapp
          global_conf=global_conf)
          File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
          return loadobj(APP, uri, name=name, **kw)
          File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 271, in loadobj
          global_conf=global_conf)
          File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
          global_conf=global_conf)
          File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 320, in _loadconfig
          return loader.get_context(object_type, name, global_conf)
          File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 450, in get_context
          global_additions=global_additions)
          File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 562, in _pipeline_app_context
          for name in pipeline[:-1]]
          File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 458, in get_context
          section)
          File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 517, in _context_from_explicit
          value = import_string(found_expr)
          File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 22, in import_string
          return pkg_resources.EntryPoint.parse("x=" + s).load(False)
          File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2302, in load
          return self.resolve()
          File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2308, in resolve
          module = __import__(self.module_name, fromlist=['__name__'], level=0)
          ImportError: No module named middleware.keystone_context_filter
          [2017-04-14 14:04:29 +0000] [147944] [INFO] Worker exiting (pid: 147944)
          [2017-04-14 14:04:29 +0000] [147939] [INFO] Shutting down: Master
          [2017-04-14 14:04:29 +0000] [147939] [INFO] Reason: Worker failed to boot.
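
          One way to narrow this down (a quick sketch; the full module path below is only an assumption based on the ImportError above, so adjust it to whatever the grep actually shows):

              # which entry point does the paste pipeline reference?
              grep -n keystone /etc/monasca/api-config.ini

              # is that module importable from the installed monasca-api?
              python -c 'import monasca_api.middleware.keystone_context_filter'

          If the import fails, the installed monasca-api does not ship that middleware: either point the filter at a module that exists in your version, or reinstall the monasca-api version used in this guide.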

    5. Need final help (I think).
      Can you show me an example of the "setup monasca-agent,~~" command for Mitaka?
