I wanted to see whether it was reasonable to use Linux/RTAI and Comedi to perform real-time data acquisition in our lab.
Here's the setup.
Hardware:
Intel(R) Core(TM)2 Quad CPU Q9650 @ 3.00GHz
INTEL DP45SG motherboard
ATI Radeon HD 4550 rev0
(wireless card present)
[Purchased from System76 "Wild dog" with 8GB ram]
I compiled the kernel per the instructions on the RTAI site; I should probably write a separate post on that, but it basically followed the "Kubuntu" instructions.
I ran the latency test on one CPU for about 5 minutes while compiling a kernel with 5 threads and running graphics in the Chromium browser (load average: 7.56, 5.57, 2.90).
The result was pretty impressive. I could see the graphics processes grinding to a halt during the loaded run, but the maximum latency never went above about 4 us (4062 ns).
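If you want to reproduce a comparable load without a kernel tree or a browser handy, something like the quick sketch below, which just spins a few CPU-bound worker processes, should do. This is not what I actually ran (my load came from the kernel build and Chromium), and the worker count and duration here are arbitrary.

#!/usr/bin/env python
# busy_load.py -- spin N CPU-bound workers to load the machine while the
# latency test runs (a stand-in for the kernel compile + browser I used)
import multiprocessing
import time

def burn(seconds):
    # busy-loop on throwaway arithmetic for `seconds` seconds
    end = time.time() + seconds
    x = 0
    while time.time() < end:
        x = (x * 3 + 1) % 1000003
    return x

if __name__ == '__main__':
    nworkers = 6     # arbitrary: enough to push the load average up
    duration = 300   # seconds, roughly the length of my latency run
    procs = [multiprocessing.Process(target=burn, args=(duration,))
             for _ in range(nworkers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()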
Summary statistics (latencies in ns):
        lat min| ovl min| lat avg| lat max| ovl max| overruns
max       -1462    -1520    -1086     4062     4062         0
min       -1523    -1523    -1448     -711     2543         0
avg       -1515    -1523    -1244      554     3398         0
stddev      7.6      0.5     63.1   1009.8    383.5       0.0
Here's top during the loaded run:
top - 20:31:39 up 1:33, 7 users, load average: 7.56, 5.57, 2.90
Tasks: 222 total, 8 running, 214 sleeping, 0 stopped, 0 zombie
Cpu0 : 99.0%us, 0.0%sy, 0.0%ni, 0.0%id, 1.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 99.0%us, 0.0%sy, 0.0%ni, 1.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 0.0%us, 75.1%sy, 0.0%ni, 24.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 4161332k total, 3974080k used, 187252k free, 198388k buffers
Swap: 4482092k total, 176k used, 4481916k free, 3078760k cached
Here's the short Python script I used to summarize the output of the latency run:
#!/usr/bin/env python
# analyze_latency_log.py
# Summarize an RTAI latency test log.
# Header line format in the log:
# RTH| lat min| ovl min| lat avg| lat max| ovl max| overruns
import numpy as np

f = open('latency-2.6.29.4-rtai371-ni64gb-running-load-cpu3.log').readlines()
fields = [line.split() for line in f]
# keep only the data rows, which are tagged "RTD|"
numberstrs = [farr for farr in fields if len(farr) and farr[0] == 'RTD|']
nrow = len(numberstrs)
ncol = 6
data = np.zeros((nrow, ncol))
lno = 0
for line in numberstrs:
    # strip the "|" column separators, then convert the six values to ints (ns)
    line = [ss.replace("|", "") for ss in line]
    vals = [int(s) for s in line[1:]]
    data[lno, :] = np.array(vals)
    lno += 1

# column-wise summary statistics over the whole run
mx = np.max(data, axis=0)
mi = np.min(data, axis=0)
mm = np.average(data, axis=0)
va = np.sqrt(np.var(data, axis=0))

print """       lat min| ovl min| lat avg| lat max| ovl max| overruns"""
print """max    %6.0f %6.0f %6.0f %6.0f %6.0f %6.0f""" % (mx[0], mx[1], mx[2], mx[3], mx[4], mx[5])
print """min    %6.0f %6.0f %6.0f %6.0f %6.0f %6.0f""" % (mi[0], mi[1], mi[2], mi[3], mi[4], mi[5])
print """avg    %6.0f %6.0f %6.0f %6.0f %6.0f %6.0f""" % (mm[0], mm[1], mm[2], mm[3], mm[4], mm[5])
print """stddev %6.1f %6.1f %6.1f %6.1f %6.1f %6.1f""" % (va[0], va[1], va[2], va[3], va[4], va[5])