ECE516 Lab 2 ("lab2"), 2024

XV (eXtended meta/uni/Verse) combines XR (eXtended Reality), XI (eXtended Intelligence), XB (eXtended Being, including Digital Twin), XE (eXtended Economy), and XS (eXtended Society).

This work will lay the foundation for comparametrics, i.e. understanding "What does a camera measure?". Lab 1 asked "What is a camera?" whereas Lab 2 will take a first step towards understanding what a camera measures, and more generally, towards sensing and metasensing.

That fundamental question about what a camera measures is what led S. Mann to invent HDR (High Dynamic Range) imaging in the 1970s/80s, and develop it further at MIT in the early 1990s, as well as to invent Metavision, the predecessor of the metaverse.

Most research on image processing fails to fundamentally address the natural philosophy (physics) of the quantity that a camera measures, e.g. at the pixel level, and it rarely conveys what a camera does as a sensor which we can ourselves sense (sensing of sensing). In some sense the research gets lost in a "forest of pixels" and can't see the "trees" (individual pixels) for the "forest" (the array of pixels).

You can't do a good job of studying forestry if you only ever look at forests and never at individual trees. To begin deeply understanding forests, we might wish to begin by really understanding what a tree is.

Likewise we'll never really understand what an image or picture is if we can't first understand what a pixel (picture element) is.

Mann's philosophy that led to the invention of HDR was to regard a pixel as a light meter, much like a photographic light meter. Under this philosophy, a camera is regarded as an array of light meters, each one measuring light coming from a different specific direction. With that philosophy in mind, differently exposed pictures of the same subject matter are just different measurements of the same reality. These different measurements of the same reality can be combined to achieve a better and better estimate of the true physical quantity.

This quantity is neither radiance nor irradiance nor the like (i.e. it does not have a flat spectral response), nor is it luminance or illuminance or the like (i.e. it does not match the photometric response of the human eye). It is something new, and specific to each kind of sensor. It is called the "photoquantity" or the "photoquantigraphic unit", and is usually denoted by the letter q.

Often we'll regard the camera as an array of light meters, perhaps sampled on a 2-dimensional plane, thus giving us an array of photoquantities q(x,y) over continuous variables x and y, which may be sampled over a discrete array of samples, which we might write as q[x,y], using square brackets to denote the digital domain even as the range might remain analog, or at least "undigital" (Jenny Hall's take on "undigital"). A typical camera reports some function of q, e.g. f(q(x,y)) or f(q[x,y]).
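As a concrete (and purely illustrative) sketch of combining differently exposed measurements, consider the following small C++ program. It assumes a hypothetical power-law response f(q) = q^(1/gamma); real cameras have their own response functions, which is exactly what the comparametric approach is about recovering, so treat the response model and the numbers here as placeholders.

// Minimal sketch: two differently exposed readings of the same photoquantity q,
// each inverted through an ASSUMED response f(q) = q^(1/gamma), then combined.
#include <cmath>
#include <cstdio>

const double gamma_ = 2.2;                                     // assumed response parameter
double f(double q)    { return std::pow(q, 1.0 / gamma_); }    // assumed camera response
double finv(double y) { return std::pow(y, gamma_); }          // its inverse

int main() {
    const double q_true = 0.30;        // the (unknown) photoquantity
    const double k = 4.0;              // second picture gets 4x the exposure
    double f1 = f(q_true);             // reading from the "dark" picture
    double f2 = f(k * q_true);         // reading from the "light" picture

    // Each reading gives its own estimate of q once we undo f and the exposure gain:
    double q1 = finv(f1);              // estimate from picture 1
    double q2 = finv(f2) / k;          // estimate from picture 2
    double q_hat = 0.5 * (q1 + q2);    // combine (equal weights here; real readings are
                                       // noisy and quantized, so weight by certainty)
    std::printf("q1=%.4f  q2=%.4f  combined=%.4f\n", q1, q2, q_hat);
    return 0;
}

The key point is that each exposure, once inverted through f and divided by its exposure gain k, yields its own estimate of the same q, and those estimates can be combined into a better estimate of the underlying physical quantity.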

In order to really fundamentally understand what a camera measures, we're going to build a really simple 1-pixel camera from the photocell that's included in the lab kit. We'll calibrate and understand the camera using a simple 1-dimensional array of lights. Since the camera has only 1 pixel, we won't be distracted by spatially varying quantities, i.e. by the domain (x,y) or [x,y]. This will force us to focus on the range, q, which is in fact the domain of f. Thus our emphasis here is on dynamic range!
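A minimal sketch of this 1-pixel-camera measurement loop is shown below. It assumes the Arduino core for the ESP32 together with the Adafruit_NeoPixel library, a photocell voltage divider read on GPIO 34, and the LED strip data line on GPIO 13; the pin numbers, LED count, and delay are placeholders to adapt to your own wiring.

// 1-pixel camera: step the number of lit LEDs (roughly proportional to q)
// and record what the photocell "camera" reports, i.e. f(q).
#include <Adafruit_NeoPixel.h>

const int PHOTOCELL_PIN = 34;   // ADC pin reading the photocell voltage divider (assumed wiring)
const int LED_PIN       = 13;   // data pin of the addressable LED strip (assumed wiring)
const int NUM_LEDS      = 16;

Adafruit_NeoPixel strip(NUM_LEDS, LED_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  Serial.begin(115200);
  strip.begin();
  strip.show();                 // start with all LEDs off
}

void loop() {
  for (int n = 0; n <= NUM_LEDS; n++) {
    strip.clear();
    for (int i = 0; i < n; i++) strip.setPixelColor(i, strip.Color(255, 255, 255));
    strip.show();
    delay(500);                                 // let the photocell settle
    int reading = analogRead(PHOTOCELL_PIN);    // f(q): what the 1-pixel camera reports
    Serial.printf("%d,%d\n", n, reading);       // log n (proportional to q) and f
  }
}

Logging n (roughly proportional to q) against the ADC reading gives a first empirical look at the response function f(q), which is exactly the quantity comparametrics is concerned with.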

S.W.I.M. = Sequential Wave Imprinting Machine

We begin by building a SWIM = Sequential Wave Imprinting Machine, which is a scientific "outstrument" (the opposite of a scientific instrument, in the sense that it displays outside of the box into which it is built rather than displaying "in the box").

The SWIM will be constructed from an addressable LED array and any suitable microcontroller. As a case study we'll use the ESP32 (WROOM), e.g. the ESP-WROOM-32 or ESP32-S with WiFi and Bluetooth included in the lab kit, or any equivalent that you can get from a wide variety of vendors; its connectivity will make it easy to connect directly to the metaverse and extended metaverse. The lab kit also includes a strip of addressable LEDs; alternatively you can buy one from many different vendors, e.g. digikey.ca, DigiKey Part Number 1528-1636-ND.
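Here is a minimal SWIM sketch under the same assumptions as above (Arduino core for ESP32, Adafruit_NeoPixel library, LED data on GPIO 13, 8 LEDs); the bit pattern is just a placeholder to replace with columns of your own name or picture.

// Minimal SWIM: repeatedly flash successive "columns" of a pattern on the strip,
// so that waving the strip through the air sweeps the pattern out in space.
#include <Adafruit_NeoPixel.h>

const int LED_PIN  = 13;
const int NUM_LEDS = 8;
Adafruit_NeoPixel strip(NUM_LEDS, LED_PIN, NEO_GRB + NEO_KHZ800);

// Each byte is one column of an 8-pixel-tall pattern; bit i lights LED i.
// Replace with columns of your own name or profile picture.
const uint8_t pattern[] = { 0xFF, 0x81, 0x81, 0xFF, 0x00, 0x3C, 0x42, 0x3C };
const int NUM_COLS = sizeof(pattern);

void setup() {
  strip.begin();
  strip.show();
}

void loop() {
  for (int c = 0; c < NUM_COLS; c++) {
    for (int i = 0; i < NUM_LEDS; i++) {
      bool on = pattern[c] & (1 << i);
      strip.setPixelColor(i, on ? strip.Color(0, 80, 0) : 0);
    }
    strip.show();
    delay(2);                  // column time; tune to your waving speed
  }
}

Waving the strip while a camera takes a long exposure (or while your eye integrates the motion) sweeps these columns out in space, which is the "imprinting" in Sequential Wave Imprinting Machine.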

Marking:

Assemble the lab kit and show it working, e.g. "SWIM" out your name, or make a cool picture such as a profile picture. The easy part of the lab is worth up to 10/10; a more ambitious effort can earn more than 10/10, e.g. a couple of bonus marks.

To post your results, click "I Made It!" here: https://www.instructables.com/SWIM-Sequential-Wave-Imprinting-Machine-Lightpaint/

Optional fun: you can use the SWIM to do the photocell experiment and compare your results with data gathered in a previous year's lecture (link) and with the data gathered from the Blue Sky solar cell (link).

See also the Photocell Experiment and the Instructable entitled Phenomenological Augmented Reality.


References:
• Prof. Wang's reference document
• Kineveillance: look at Figures 4, 5, and 6, and Equations 1 to 10.
• The concept of veillance flux (link);
• (optional reading) Minsky, Kurzweil, and Mann, 2013;
• (optional reading) Metavision;
• (optional reading) Humanistic Intelligence; see around Figure 3 of this paper.
• (optional reading) If you like mathematics and physics, check out the concepts of veillance flux density, etc.; see Part 3 of this set of papers: Veillance.
• (optional reading) 3-page excerpt from the comparametrics HDR book chapter: http://wearcam.org/ece516/comparametrics_scalingop_3page_book_excerpt.pdf
• (optional reading) Adnan's notes for the invited guest lecture 2023feb09: http://wearcam.org/ece516/comparam_lecture_adnan_2023feb09