Monday, January 29, 2018

moving citrix windows on Windows

When using Nihon Kohden NeuroWorkbench via Citrix, I occasionally lose the ability to see the video display window.

This seems to be due to switching between display configurations: sometimes I use it on a dual-display computer, sometimes on a single-display one. The video window appears to be displayed "off screen".

I wanted a way to fix this, and it turns out to be possible. For example, from Python I can find all the windows with "Video" in the title and move them to position (0,0) with code like this:


from pprint import pprint

import win32gui
import win32con # constants

VERBOSE = 1
windowlist = []  # filled in by the EnumWindows callback below

def callback(hwnd, extra):
    rect = win32gui.GetWindowRect(hwnd)
    x = rect[0]
    y = rect[1]
    w = rect[2] - x
    h = rect[3] - y
    if VERBOSE > 1:
        print("Window %s:" % win32gui.GetWindowText(hwnd))
        print("\tLocation: (%d, %d)" % (x, y))
        print("\t    Size: (%d, %d)" % (w, h))
    windowlist.append({'text':win32gui.GetWindowText(hwnd),
                       'hwnd':hwnd,
                       'xy': (x,y),
                       'w': w,
                       'h': h})
                                                     
win32gui.EnumWindows(callback, None)

if VERBOSE >=2:
    print("global list of windows in 'windowlist'")
    pprint(windowlist)
    
videowinlist = [win for win in windowlist if win['text'].find('Video') >= 0]
if VERBOSE > 0:
    print("video windows")
    pprint(videowinlist)

for win in videowinlist:
    # nudge the window into its move/size state first (may not be strictly needed)
    r = win32gui.SendMessage(win['hwnd'], win32con.WM_ENTERSIZEMOVE)
    if VERBOSE:
        print("result of SendMessage")
        pprint(r)
    # move the window to (0,0), keeping its current size
    win32gui.SetWindowPos(win['hwnd'], 0, 0, 0, win['w'], win['h'],
                          win32con.SWP_NOOWNERZORDER | win32con.SWP_SHOWWINDOW)
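The script above moves every matching window unconditionally. If you only wanted to touch windows that are actually lost off screen, a small pure-Python check like this could help (the helper name and the comparison against the virtual-screen rectangle are my own sketch; on Windows the vx/vy/vw/vh values would come from win32api.GetSystemMetrics with the SM_XVIRTUALSCREEN family of constants):

```python
def is_offscreen(x, y, w, h, vx, vy, vw, vh):
    """Return True if the window rect (x, y, w, h) lies completely
    outside the virtual screen rectangle (vx, vy, vw, vh).

    On Windows, vx/vy/vw/vh would come from calls like
    win32api.GetSystemMetrics(win32con.SM_XVIRTUALSCREEN).
    """
    return (x + w <= vx or y + h <= vy or
            x >= vx + vw or y >= vy + vh)

# example: a single 1920x1080 display;
# a window parked at (2000, 0) is left over from a dual-display session
print(is_offscreen(2000, 0, 640, 480, 0, 0, 1920, 1080))   # True
print(is_offscreen(100, 100, 640, 480, 0, 0, 1920, 1080))  # False
```

With that check in place, the loop would only reposition windows whose rectangle no longer intersects the current virtual screen.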

Tested so far only on Windows 10, but it should work on Windows 7 and other versions.

Saturday, August 12, 2017

jupyter notebook cells in markdown

I would like to be able to write jupyter notebooks and other code in plain text. There are many ways to do this: the "noweb" format with Pweave supports some nice variants, and it looks like spyder-reports adds support for this as well. Markdown is a much-favored report format, I think mostly because there is now a lot of support for it on github and across the web.

Trying out an idea for allowing jupyter notebook cells to be written in markdown:

Abuse the link-reference notation in markdown to store JSON metadata inside the parentheses of the link reference, like so:

[//]: # ({"cell_type": "markdown", "metadata":{"slideshow": {"slide_type": "slide"}}})
# test notebook 1
This notebook will have metadata attached to cells
[//]: # ({"cell_type": "markdown", "metadata":{"slideshow": {"slide_type": "skip"}, "tags": ["tag_example1"]}})
## second cell
- skip this slide
- it has a tag attached
[//]: # ({"cell_type": "code", "metadata":{"slideshow": {"slide_type": "slide"}}})
```{.python .input  n=1}
# first code cell, also a slide
print("hello first code cell")
print('second line of first code cell')
```
```{.json .output n=1}
[
 {
  "name": "stdout",
  "output_type": "stream",
  "text": "hello first code cell\nsecond line of first code cell\n"
 }
]
```
[//]: # ({"cell_type": "code", "metadata":{"slideshow": {"slide_type": "fragment"}}})
```{.python .input  n=2}
# second code cell, slide fragment
print("second code cell")
```
```{.json .output n=2}
[
 {
  "name": "stdout",
  "output_type": "stream",
  "text": "second code cell\n"
 }
]
```
[//]: # ({"cell_type": "markdown"})
#### another markdown cell
- this has no metadata so may not get an
  optional metadata header
- item
- item
[//]: # ({"cell_type": "code", "metadata":{"collapsed": true}})
```{.python .input}
```
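To check that the idea round-trips, the metadata comments are easy to recover with a few lines of Python (a sketch; the regex and the extract_cell_metadata name are mine, not part of any existing tool):

```python
import json
import re

# matches lines like: [//]: # ({"cell_type": "markdown", ...})
CELL_META_RE = re.compile(r'^\[//\]: # \((\{.*\})\)\s*$')

def extract_cell_metadata(markdown_text):
    """Return the JSON metadata dicts hidden in link-reference comments."""
    metas = []
    for line in markdown_text.splitlines():
        m = CELL_META_RE.match(line)
        if m:
            metas.append(json.loads(m.group(1)))
    return metas

doc = '''[//]: # ({"cell_type": "markdown", "metadata": {"slideshow": {"slide_type": "slide"}}})
# test notebook 1
[//]: # ({"cell_type": "code", "metadata": {"collapsed": true}})
'''
for meta in extract_cell_metadata(doc):
    print(meta["cell_type"])  # prints: markdown, then code
```

Because the `[//]: # (...)` line renders as nothing in standard markdown, the document stays readable while a converter could use these dicts to rebuild notebook-level cell metadata.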

Saturday, September 8, 2012


Book Review: Programming Computer Vision with Python

by Jan Erik Solem
published by O'Reilly, 2012


Computer vision is a fascinating subject and in recent years it has gone from being an academic pursuit to a practical everyday technology. Anyone who does Google searches or uses a smartphone likely makes use of computer vision algorithms on a regular basis, perhaps without even knowing it.  Computer vision plays key roles in a broad range of fields from law enforcement, manufacturing, biology, and medicine to social media and gaming.

I was excited to read Jan Erik Solem's new book, Programming Computer Vision with Python, because it combines a practical survey of many of these mature technologies with the clarity and ease of use of my favorite programming language, Python.

I have a background in computer vision and I wanted to learn more about topics like multi-view geometry methods, so, for my purposes, Solem's book was a dream come true. The first five chapters lead you through a series of important mathematical and software tools that make multi-view 3D reconstruction a natural and practical application; I was doing it myself by the end of chapter five. The example code is clear and available from the author's website and via github.
I had a great time reading the book and going through the programming exercises. I can recommend the book strongly. I just wish I could figure out to whom to recommend it!

The practical step-by-step approach that Solem uses allowed me to dig into the math behind the algorithms while being able to play with working code. This is a great way to learn. So I can imagine that for a college or graduate student, in the right sort of course, the book would be invaluable. It requires a little linear algebra, geometry, and familiarity with vector spaces. I can imagine another audience for this book: the smart, ambitious programmer who wants to use computer vision as part of his or her cool new product. The book also covers classifying and searching images using various approaches, along with image segmentation techniques and an introduction to the OpenCV library for speed in a realtime object-tracking application. It even discusses building web applications which make use of these techniques.

The book is definitely not a stand-alone textbook; it leaves out the sort of wider perspective that a course or textbook would provide. This isn't a criticism per se, but I think the book would have benefited from short asides of the sort used in some books which highlight the context and the motivations behind the algorithms.

For example, one of the fundamental problems in computer vision is the correspondence problem: how do you know whether one point on an object in an image corresponds to the same point in another image taken from a different position in space or time? Years of work and hundreds, possibly thousands, of doctoral theses have been devoted to different ways of attempting to solve this problem. It is one of the fundamental (so-called) ill-posed inverse problems of vision. (It's called “ill-posed” because there is not sufficient information in the images to generate a unique solution.) In chapter 2, Solem introduces a series of local feature detectors, culminating with the Scale-Invariant Feature Transform (SIFT). The presentation is so matter-of-fact that you wouldn't know that SIFT and similar algorithms are a major advance in solving this fundamental problem.

So far, I haven't mentioned the other strength of the book, which is Python itself and its surrounding scientific programming ecosystem. For years, Python has been one of the clearest and most reusable of interactive programming languages. With the evolution of tools and libraries such as scipy, IPython, sage, and the scikits, we have entered a golden age for doing numerical work in Python.

It was a pleasure to be able to read through the book with an IPython notebook open, so that I could interact with the code as I read. The notebook format left me with beautiful, publication quality graphs and images and typeset mathematics based upon the book’s examples, and recorded my work on exercises and my own experiments. I wouldn't be surprised if, in the future, similar books include IPython notebooks as part of their teaching materials.

What is left out of Solem's discussion are some of the other major packages for computer vision and machine learning. There’s scikit-learn, scikit-image, Luis Coelho's Mahotas, and the UC Davis Cell Profiler library, as well as many other libraries from research groups which develop primarily in other languages but which have Python bindings. But computer vision is a big subject, and any finite-sized book needs some focus. The bibliographic references are more than enough if one wants to learn more. Certainly, whether you are a student, an ambitious app developer, or a "recreational" computer scientist like myself, Solem's book will be a useful and fun addition to your bookshelf.

Chris Lee-Messer
September 7, 2012


Technical information



The book is published by O'Reilly and has O'Reilly's distinctive look-and-feel, with a bullhead catfish as the animal on the front cover. I read the ebook version as a PDF without difficulty on a computer screen. The style is easy to read, focused, but not overly formal. The quality of the editing was good and the code examples worked. Setting up the software and obtaining the data sets used in the examples is covered in the appendices and is easy for an experienced Python user with an Internet connection; I'm not sure how easy it would be for someone completely new to Python, numpy, and scipy. Those with access to a well-supported Linux distribution have it easy, as all the packages can be installed with a single click. Packaged distributions like Enthought Python (Windows, Mac, and Linux) and Pythonxy (Windows) get you most of the way.

1. Basic Image Handling and Processing
   - practicalities of using Python to manipulate images
2. Local Image Descriptors
   - Harris corner detector, Scale-Invariant Feature Transform, matching geotagged images
3. Image to Image Mappings
   - Homographies, Warping Images, automated stitching of images to create panoramas
4. Camera Models and Augmented Reality
   - Pin-hole camera model, camera calibration, pose estimation, augmented reality
5. Multiple View Geometry
   - Epipolar geometry, computing with cameras and 3D structure, multiple view reconstruction, stereo images
6. Clustering Images
   - K-Means Clustering, Hierarchical Clustering, Spectral Clustering
7. Searching Images
   - Content-Based Image Retrieval, Visual Words, Indexing Images, Searching the Database for Images, Ranking Results Using Geometry, Building Demos and Web Applications
8. Classifying Image Content
   - K-Nearest Neighbors, Bayes Classifier, Support Vector Machines, Optical Character Recognition
9. Image Segmentation
   - Graph Cuts, Segmentation Using Clustering, Variational Methods
10. OpenCV
    - The OpenCV Python Interface, OpenCV Basics, Processing Video, Tracking

 

Sunday, October 31, 2010

nexenta core and NexentaStor 3

Our lab is outgrowing the HP Smart array storage that we currently use.  So we are looking at other options.  I like the ideas and capabilities behind ZFS a lot, so I thought I would look at OpenSolaris and Nexenta with the focus on Nexenta because it's not clear what will happen to OpenSolaris.

Nexenta is a mix of the OpenSolaris kernel with the Ubuntu 8.04 LTS runtime. Both Nexenta Core 3.0 and NexentaStor 3.0.3 were easy to install. With the core platform, it's sometimes a bit tricky to figure out which version of a command to use: something from Solaris or something from Ubuntu. It means I need to read Nexenta's web site, the OpenSolaris manuals, and sometimes the Ubuntu instructions.

NexentaStor is interesting--it is an appliance interface on top of the core platform with additional commercial plugins available. The appliance approach is appealing because, after setup, it would make it easier for others to administer the boxes when I leave the lab. There is a free community license that I'm using to evaluate the product.

Tuesday, August 3, 2010

upgrading to vmware workstation 7.1 recompiling kernel modules

I've used vmware workstation since it was originally released and have it installed on linux and windows machines as far back as redhat 6.1 or 7 through ubuntu 10.04 and windows 7.
I recently upgraded workstation to 7.1 on one of my ubuntu systems and discovered that the old trusty vmware-config.pl had been retired. It wasn't completely obvious from the manual how to upgrade the host kernel modules, so this is how you do it:

sudo vmware-modconfig --console --install-all


and we have smooth sailing once again.

Wednesday, June 2, 2010

Jessica is moving

Exciting events.  My favorite lawyer Jessica Lee-Messer has moved :-)  She is now a partner at Lee-Messer Greenberg Wanderman  Family Law.

Now, I just want a tour of her new offices!

Monday, March 8, 2010

testing rtai-3.7.1 linux kernel 2.6.29.4 setup on ubuntu 9.04

I wanted to see if it was reasonable to use linux/RTAI and comedi to perform realtime data acquisition in our lab.

Here's the setup.

Hardware:

Intel(R) Core(TM)2 Quad CPU Q9650 @ 3.00GHz
INTEL DP45SG motherboard
ATI Radeon HD 4550 rev0
(wireless card present)
[Purchased from System76 "Wild dog" with 8GB ram]

Compiled the kernel per the instructions on the RTAI site. I should probably write a separate post on this, but it basically followed the "Kubuntu" instructions.

Ran the latency test for 5 or so minutes with a kernel compile running 5 threads and graphics running in the chromium browser (load average: 7.56, 5.57, 2.90).


It was pretty impressive. I could see the graphics processes grinding to a halt on the machine during the loaded run, but the max latency never went above about 4us (4062ns).


Summary statistics:
          lat min|  ovl min|  lat avg|  lat max|  ovl max| overruns
max         -1462     -1520     -1086      4062      4062        0
min         -1523     -1523     -1448      -711      2543        0
avg         -1515     -1523     -1244       554      3398        0
stddev        7.6       0.5      63.1    1009.8     383.5      0.0


top - 20:31:39 up  1:33,  7 users,  load average: 7.56, 5.57, 2.90
Tasks: 222 total,   8 running, 214 sleeping,   0 stopped,   0 zombie
Cpu0  : 99.0%us,  0.0%sy,  0.0%ni,  0.0%id,  1.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  : 99.0%us,  0.0%sy,  0.0%ni,  1.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us, 75.1%sy,  0.0%ni, 24.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4161332k total,  3974080k used,   187252k free,   198388k buffers
Swap:  4482092k total,      176k used,  4481916k free,  3078760k cached




Here's my short python script to analyze the output of the latency run:
#!/usr/bin/env python
# analyze_latency_log.py
# header format:
# RTH|    lat min|    ovl min|    lat avg|    lat max|    ovl max|   overruns
import numpy as np
f = open('latency-2.6.29.4-rtai371-ni64gb-running-load-cpu3.log').readlines()


fields = [line.split() for line in f]
numberstrs = [farr for farr in fields if len(farr) and farr[0]=='RTD|']
nrow = len(numberstrs)
ncol = 6
data = np.zeros((nrow,ncol))
for lno, line in enumerate(numberstrs):
    line = [ss.replace("|", "") for ss in line]
    vals = [int(s) for s in line[1:]]
    data[lno, :] = np.array(vals)


mx = np.max(data, axis=0)
mi = np.min(data, axis=0)
mm = np.average(data, axis=0)
va = np.sqrt(np.var(data, axis=0))

print  """         lat min|       ovl min|     lat avg|     lat max|    ovl max|    overruns"""
print  """max      %6.0f        %6.0f        %6.0f        %6.0f      %6.0f     %6.0f""" % (mx[0],mx[1],mx[2],mx[3],mx[4],mx[5])
print  """min      %6.0f        %6.0f        %6.0f        %6.0f      %6.0f     %6.0f""" % (mi[0],mi[1],mi[2],mi[3],mi[4],mi[5])
print  """avg      %6.0f        %6.0f        %6.0f        %6.0f      %6.0f     %6.0f""" % (mm[0],mm[1],mm[2],mm[3],mm[4],mm[5])
print  """stddev   %6.1f        %6.1f        %6.1f        %6.1f      %6.1f     %6.1f""" % (va[0],va[1],va[2],va[3],va[4],va[5])
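A quick way to sanity-check the parsing logic without a real log file is to feed the same line format a couple of fabricated RTD lines (values invented for illustration; plain Python, no numpy needed for the check):

```python
# two made-up RTD lines in the same format as the RTAI latency log
sample = """RTH|    lat min|    ovl min|    lat avg|    lat max|    ovl max|   overruns
RTD|      -1510|      -1523|      -1200|       3000|       3400|          0
RTD|      -1520|      -1523|      -1300|       4062|       4062|          0
"""

rows = []
for line in sample.splitlines():
    fields = line.split()
    if fields and fields[0] == 'RTD|':
        # strip the trailing '|' from each field and convert to int
        rows.append([int(s.replace("|", "")) for s in fields[1:]])

# column-wise maximum; column 3 is "lat max", the worst-case latency
col_max = [max(col) for col in zip(*rows)]
print(col_max[3])  # 4062
```

The header line starting with `RTH|` is skipped by the `fields[0] == 'RTD|'` filter, which is the same trick the script above uses.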