Change: Data Acquisition

Background: Every 5 minutes cron runs a shell script, hive.sh, which calls other shell scripts that read the sensors. Each sensor is usually read by a short C program. Some sensors are read once. The program that reads the HX711, however, reads it 64 times, averages the readings, throws away the outliers more than 5% from that average, and then averages the remaining readings again.
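
For reference, here is a minimal sketch in C of that outlier-rejection scheme (read_raw(), the constants, and the function name are illustrative stand-ins, not the actual HX711 reader):

  /* Take N raw readings, average them, discard any reading more than
   * 5% from that average, then average the survivors. */
  #include <math.h>

  #define N_READS   64
  #define TOLERANCE 0.05          /* reject readings >5% from the mean */

  extern long read_raw(void);     /* hypothetical raw HX711 read */

  double filtered_average(void)
  {
      double reads[N_READS], sum = 0.0, sum2 = 0.0;
      int i, kept = 0;

      for (i = 0; i < N_READS; i++) {
          reads[i] = (double)read_raw();
          sum += reads[i];
      }
      double mean = sum / N_READS;

      /* Second pass: keep only readings within 5% of the first mean. */
      for (i = 0; i < N_READS; i++) {
          if (fabs(reads[i] - mean) <= TOLERANCE * fabs(mean)) {
              sum2 += reads[i];
              kept++;
          }
      }
      return kept ? sum2 / kept : mean;   /* fall back if all rejected */
  }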

Problems with the current approach:

  1. Difficulty filtering noisy sensors or bad reads.
  2. Up to 5 minutes of latency in detecting anomalies such as swarms, hive tampering, and sensor problems.

Proposed change: It is proposed that a daemon read all the sensors and store the readings in a circular FIFO buffer in shared memory. Slow-changing signals could be read once every 10 seconds (30 samples per 5-minute logging interval); fast-changing signals could be read once a second (300 samples per 5-minute logging interval). Methods will be provided to get the average, last reading, direction and rate of change, variance, and noise figures from the data in the buffer. A filtered average will be calculated using the HX711 program's method of discarding outliers; other noise filters can be implemented.
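
A rough sketch of what the shared-memory ring buffer could look like, using POSIX shared memory (the struct layout, slot count, and function names are assumptions for illustration, not a settled design):

  #include <fcntl.h>
  #include <stdint.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define RING_SLOTS 300               /* 1 Hz samples, 5 minute window */

  struct ring {
      uint32_t head;                   /* index of next slot to write */
      double   samples[RING_SLOTS];
  };

  /* Create or open the buffer so the daemon and readers share one view. */
  struct ring *ring_open(const char *name)
  {
      int fd = shm_open(name, O_CREAT | O_RDWR, 0644);
      if (fd < 0)
          return NULL;
      if (ftruncate(fd, sizeof(struct ring)) < 0) {
          close(fd);
          return NULL;
      }
      void *p = mmap(NULL, sizeof(struct ring),
                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      close(fd);
      return p == MAP_FAILED ? NULL : (struct ring *)p;
  }

  /* Daemon side: overwrite the oldest slot each sample period. */
  void ring_push(struct ring *r, double v)
  {
      r->samples[r->head % RING_SLOTS] = v;
      r->head++;
  }

The daemon would call ring_push() once per sample period; readers map the same named segment and never block the writer.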

Every 5 minutes, hive.sh would call some of the provided methods to access the data in the buffer and log the filtered average and other metrics.
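
Two of those accessor methods might look like this (building on the struct ring sketch above; the names and the half-window slope estimate are illustrative choices, not a specification):

  #include <stdint.h>

  /* Most recent sample; assumes the ring has filled at least once. */
  double ring_last(const struct ring *r)
  {
      return r->samples[(r->head + RING_SLOTS - 1) % RING_SLOTS];
  }

  /* Rate of change per sample: mean of the newer half of the window
   * minus the mean of the older half, divided by their separation. */
  double ring_rate(const struct ring *r)
  {
      double old_sum = 0.0, new_sum = 0.0;
      uint32_t i, half = RING_SLOTS / 2;

      for (i = 0; i < half; i++) {
          old_sum += r->samples[(r->head + i) % RING_SLOTS];
          new_sum += r->samples[(r->head + half + i) % RING_SLOTS];
      }
      return (new_sum - old_sum) / half / half;
  }

hive.sh would most likely reach these through small command-line wrappers, one per metric, rather than linking against the daemon directly.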

The buffer will also be monitored for anomalies (e.g., a sudden drop in weight indicating a swarm). When an anomaly is detected, the contents of the buffer will be saved to a file and a "hyper logging" mode started, in which every sample is logged to that file until the event is over. This preserves a detailed record of sensor changes, as well as other data such as bee counts, from 5 minutes before the event, through the event itself, and for 5 minutes after. Audio and video streams will be similarly buffered and dumped.
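
A sketch of how the monitor could tie these pieces together (the threshold, file path, detection rule, and end-of-event test are all placeholder assumptions; in particular, the 5-minutes-after tail is reduced here to a simple stabilization check):

  #include <math.h>
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define SWARM_DROP_THRESHOLD 1.0     /* kg lost across the window */

  static bool  hyper_logging = false;
  static FILE *event_log     = NULL;

  void monitor(struct ring *r, double new_sample)
  {
      ring_push(r, new_sample);

      /* ring_rate() is per sample; scale to the whole window. */
      double window_change = ring_rate(r) * RING_SLOTS;

      if (!hyper_logging && window_change < -SWARM_DROP_THRESHOLD) {
          /* Event detected: dump the 5 minutes of pre-event context,
           * then log every subsequent sample. */
          event_log = fopen("/var/log/hivetool/event.log", "a");
          if (event_log) {
              for (uint32_t i = 0; i < RING_SLOTS; i++)
                  fprintf(event_log, "%f\n",
                          r->samples[(r->head + i) % RING_SLOTS]);
              hyper_logging = true;
          }
      } else if (hyper_logging) {
          fprintf(event_log, "%f\n", new_sample);
          /* Once readings stabilize, close out the event. */
          if (fabs(window_change) < 0.1 * SWARM_DROP_THRESHOLD) {
              fclose(event_log);
              event_log = NULL;
              hyper_logging = false;
          }
      }
  }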