Quick Start for Docker & Images

The following guide walks you through the necessary steps to install, configure, and run the Data Collectors and the Time Series Historian in Docker.

  • Install Docker and Download Images
  • Docker Compose
  • Docker Run
  • Verify the Containers are Running
  • Access the UI Pages
  • Create Logging Session
  • Verify Your Data
  • Additional Information

 


Install Docker and Download Images

  • If you don't have Docker installed, download and install it from the Docker website: Docker Installation Guide
  • Visit Docker Hub and pull the Timebase images for the Time Series Historian, Collector, and Explorer

Timebase images on Docker Hub
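
For example, the three images used in the Compose file below can be pulled ahead of time:

docker pull timebase/historian:latest

docker pull timebase/collector:latest

docker pull timebase/explorer:latest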

 


Docker Compose

The quickest way to get started is to edit and run the following Docker Compose file. It will create and run three containers: one for the Time Series Historian, one for a Collector, and one for Explorer.

  • Create a "docker-compose.yml" file with the following contents:
name: timebase

services:  
  historian:
    image: timebase/historian:latest
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 4096m
    hostname: historian
    container_name: historian
    ports:
      - "4511:4511"
    restart: unless-stopped
    volumes:
      - C:\Timebase\Docker:/data

  collector:
    image: timebase/collector:latest
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 2048m
    hostname: collector
    container_name: collector
    ports:
      - "4521:4521"
    restart: unless-stopped
    volumes:
      - C:\Timebase\Collector\Config:/config
      - C:\Timebase\Collector\Simulator\State:/state
      - C:\Timebase\SF:/sf

  explorer:
    image: timebase/explorer:latest
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 2048m
    hostname: explorer
    container_name: explorer
    ports:
      - "4531:4531"
    restart: unless-stopped
    volumes:
      - C:\Timebase\Explorer:/visuals

This file assumes you are running in a Windows environment and maps volumes to folders on your Windows host. Note there is one volume for the Historian's data, three volumes for the Collector's configuration, simulator state, and store-and-forward files, and one volume for Explorer's trend configuration storage. If you are running in a Linux environment, adjust these volume paths accordingly.
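
As a sketch, on Linux the same container paths could be mapped to host directories such as the following; the host paths are illustrative, so choose locations that suit your environment and substitute them into each service's volumes section:

# Historian service
volumes:
  - /opt/timebase/historian:/data

# Collector service
volumes:
  - /opt/timebase/collector/config:/config
  - /opt/timebase/collector/simulator/state:/state
  - /opt/timebase/sf:/sf

# Explorer service
volumes:
  - /opt/timebase/explorer:/visuals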

  • Through a Command Prompt, or Terminal, browse to the location of your "docker-compose.yml" file and run the following command:

docker compose -f docker-compose.yml up --detach
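
The same file can later be used to check on or shut down the stack, for example:

docker compose -f docker-compose.yml ps

docker compose -f docker-compose.yml down

Because the data lives in the bind-mounted host folders above, it remains on the host after the containers are removed.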

 

Docker Run

  • Alternatively, using Docker Run, you can start your containers individually as follows:

docker run -d --name=historian -p 4511:4511 timebase/historian

docker run -d --name=collector -p 4521:4521 timebase/collector

docker run -d --name=explorer -p 4531:4531 timebase/explorer

 

⚠️ Make sure to mount and persist your data and config files.

These Docker Run commands will not set up volumes external to the containers, so if you delete the containers, your data will be lost. Please consult the Docker Documentation to create and mount external volumes, as the Docker Compose method above does.
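
For example, mirroring the Historian's volume mapping from the Compose file above, a Historian container with persistent data could be started like this (the host path is the same Windows folder used earlier):

docker run -d --name=historian -p 4511:4511 -v C:\Timebase\Docker:/data timebase/historian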

 

Verify the Containers are Running

  • To confirm the Time Series Historian, Collector, and Explorer containers are running, execute the following command:
docker ps
  • You should see containers named historian, collector, and explorer listed in the output.

Note that a Collector will only connect to one data source (e.g. MQTT Broker). To connect your Time Series Historian to multiple data sources, you can create additional Collector containers and configure them accordingly.
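
For example, a second Collector could be started alongside the first, mapped to a different host port so the two admin pages don't clash (the container name and host port here are illustrative):

docker run -d --name=collector2 -p 4522:4521 timebase/collector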

 


Access the UI Pages

Once running, the Time Series Historian, each Collector instance, and Explorer serve their own UI pages:

  • Time Series Historian: http://<HistorianAddress>:4511
  • Collector: http://<CollectorAddress>:4521
  • Explorer: http://<ExplorerAddress>:4531 (see Explorer Trending)
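
If you prefer to check from a terminal, a quick request to each port confirms the pages are being served (replace localhost with the relevant address):

curl -I http://localhost:4511

curl -I http://localhost:4521

curl -I http://localhost:4531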
 

Create Logging Session

By default, the Collector will "spin up" the Simulator plugin and simulate the "Juice Factory" filling process to get you started.
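
To keep an eye on the Collector container while the simulated session runs, you can follow its log output (press Ctrl+C to stop following):

docker logs -f collector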

 


Verify Your Data

Use Explorer Trending or the Time Series Historian API to access the datasets, tags, and data being stored in the Historian.
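
As a quick host-side check, assuming the Windows volume mapping from the Compose file above, you can also confirm that files are appearing in the Historian's data folder:

dir C:\Timebase\Docker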

 


Additional Information

 


 

Support Assistance

Struggling? We are here to help. Please log a ticket and we will respond via email.

Submit your ticket.