Change: Data Acquisition


Currently, cron runs a shell script, hive.sh, every 5 minutes to read the sensors. Some sensors are read once. The program that reads the HX711 reads it 64 times, averages the readings, discards the outliers more than 5% from that average, and then averages the remaining readings.
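
A minimal sketch of that filtering scheme in C, assuming the 64 raw readings have already been collected into an array. The function name filtered_average() and the sample values in main() are illustrative, not the actual HiveTool code:

 #include <math.h>
 #include <stdio.h>
 
 #define SAMPLES 64
 
 /* Average the readings, discard any reading more than 5% from
  * that average, then average what remains. */
 double filtered_average(const double readings[], int n)
 {
     double sum = 0.0;
     for (int i = 0; i < n; i++)
         sum += readings[i];
     double mean = sum / n;
 
     double kept_sum = 0.0;
     int kept = 0;
     for (int i = 0; i < n; i++) {
         if (fabs(readings[i] - mean) <= 0.05 * fabs(mean)) {
             kept_sum += readings[i];
             kept++;
         }
     }
     return kept > 0 ? kept_sum / kept : mean;
 }
 
 int main(void)
 {
     double readings[SAMPLES];
     for (int i = 0; i < SAMPLES; i++)
         readings[i] = 1000.0 + (i % 7);  /* stand-in for reading the HX711 */
     readings[3] = 2000.0;                /* an outlier to be discarded */
     printf("filtered average: %.2f\n", filtered_average(readings, SAMPLES));
     return 0;
 }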


A daemon with shared memory would read all the sensors every 5 seconds. That is 12 samples a minute, or 60 in a five-minute interval. These 60 readings (for each sensor) would be stored in a circular FIFO buffer.
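
One possible shape for that buffer, sketched in C. The struct and function names are illustrative, and the shared-memory plumbing (e.g. POSIX shm_open()/mmap()) is omitted for brevity:

 #define BUF_SAMPLES 60   /* 12 samples/minute x 5 minutes */
 
 typedef struct {
     double samples[BUF_SAMPLES];
     int    head;    /* next slot to overwrite */
     int    count;   /* valid samples, up to BUF_SAMPLES */
 } sensor_buffer;
 
 /* Called every 5 seconds by the daemon for each sensor; once the
  * buffer is full, the oldest sample is silently overwritten. */
 void buffer_push(sensor_buffer *b, double value)
 {
     b->samples[b->head] = value;
     b->head = (b->head + 1) % BUF_SAMPLES;
     if (b->count < BUF_SAMPLES)
         b->count++;
 }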

Methods would be provided to get the average, the last value, the direction and rate of change, the variance, and noise figures from the data in the buffer. A filtered average would be calculated based on the HX711 program's method of discarding outliers. Other noise filters could be implemented.
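
Building on the sensor_buffer sketch above, those accessors might look like the following. Interpreting "direction and rate of change" as the sign and magnitude of a least-squares slope is an assumption, as is every name here:

 /* i-th oldest sample, counting forward from the oldest slot. */
 static double sample_at(const sensor_buffer *b, int i)
 {
     int oldest = (b->head - b->count + BUF_SAMPLES) % BUF_SAMPLES;
     return b->samples[(oldest + i) % BUF_SAMPLES];
 }
 
 /* Most recent reading; assumes count > 0. */
 double buffer_last(const sensor_buffer *b)
 {
     return sample_at(b, b->count - 1);
 }
 
 double buffer_average(const sensor_buffer *b)
 {
     double sum = 0.0;
     for (int i = 0; i < b->count; i++)
         sum += sample_at(b, i);
     return sum / b->count;
 }
 
 /* Least-squares slope in units per sample: the sign gives the
  * direction, the magnitude the rate of change. Assumes count > 1. */
 double buffer_slope(const sensor_buffer *b)
 {
     int n = b->count;
     double sx = 0, sy = 0, sxy = 0, sxx = 0;
     for (int i = 0; i < n; i++) {
         double y = sample_at(b, i);
         sx += i; sy += y; sxy += i * y; sxx += (double)i * i;
     }
     return (n * sxy - sx * sy) / (n * sxx - sx * sx);
 }
 
 double buffer_variance(const sensor_buffer *b)
 {
     double mean = buffer_average(b), ss = 0.0;
     for (int i = 0; i < b->count; i++) {
         double d = sample_at(b, i) - mean;
         ss += d * d;
     }
     return ss / b->count;
 }

A filtered average over the buffer could then reuse the same discard-the-outliers routine shown earlier for the HX711.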

The buffer would be monitored for anomalies (e.g., a sudden drop in weight indicating a swarm). When an anomaly is detected, the contents of the buffer would be dumped and "hyper logging", sampling every 5 seconds, would be activated until the event passed.
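
A sketch of how that monitoring might look for the weight buffer, reusing the accessors above. The 0.5 kg threshold and the logging hooks are purely hypothetical, since the page does not specify them:

 #include <stdbool.h>
 #include <stdio.h>
 
 #define SWARM_DROP_KG 0.5   /* assumed threshold; not specified here */
 
 static bool hyper_logging = false;
 
 /* Stand-in for writing all buffered samples to the log. */
 static void dump_buffer(const sensor_buffer *b)
 {
     for (int i = 0; i < b->count; i++)
         printf("sample %d: %.2f\n", i, sample_at(b, i));
 }
 
 /* Run after every 5-second push to the weight buffer. */
 void check_for_swarm(const sensor_buffer *weight)
 {
     /* A newest reading far below the buffer average suggests
      * a sudden weight drop, i.e. a swarm leaving the hive. */
     if (buffer_average(weight) - buffer_last(weight) > SWARM_DROP_KG) {
         dump_buffer(weight);    /* preserve the 5 minutes before the event */
         hyper_logging = true;   /* log every 5 seconds until the event passes */
     }
 }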

This would preserve a detailed record of sensor changes, and of other data such as bee counts, from 5 minutes before the event, during it, and for 5 minutes after. Audio and video streams would be similarly buffered and dumped.