Forums

Unexpectedly disconnected "process" interactions [long]

Started by Don Y, July 10, 2016
[Apologies to folks who end up seeing this in multiple places:  USENET,
forums, mailing lists and personal email.  Similarly, apologies to
folks only seeing it in *one* such place and, possibly, missing out
on comments introduced in other venues.  And, apologies if the technology
may be unfamiliar to some.  I'm in a rush to get this sorted out so I
can get back to "real" work... including yet another /p.b./ project!  :< ]

[For those of you directly involved in this, you'll recognize much of the
text as lifted from our past emails, names elided.  I include it here for
others who won't be aware of the design approach.  Also, I plan on having
the Gerbers and BoM's ready by Monday and hope to get a good start on the
specifications by then, as well.  Expect them to appear a week later.  I
hit a snag in another project so this one suffered, a bit.  :<  It would
be great if someone could offer to draft a user manual from them!  (hint,
hint)  (HINT, HINT!!!)  I'd really like to see this installed by September
as I have some other commitments and guests coming online at that time.
And, this would be REALLY cool to showcase!]

------------

I have a quick-and-dirty little project that I probably should have
begged off -- but, it so closely resembles work that I'm doing on
another project that I opted to take it on.  /Carte Blanche/ is an
excellent motivator!  :>

As with other projects in recent past, it's a distributed control
system.  Unlike other projects, (almost) all of the "application"
software resides in a central "controller": a COTS SFF PC running
a baremetal MTOS with no physical I/O's beyond disk (for persistent
store) and NIC (for RPC's).

The "remote" nodes are relatively simple (dirt cheap) "motes" that do
little more than hardware and software "signal conditioning" for the
physical I/O's that they happen to have available, locally.  This
saves *thousands* of dollars in cabling costs -- wire and labor (which
often has to be done by a union electrician, "inspected", etc.).

A (portion of a) local namespace might appear to be:

     /devices
        /audio
           /microphone
           /speaker
        /display
           /screen
           /backlight
        /touchpanel
        /switches
           /1
           /2
           /3
        /lamps
           /1
           /2
           /3
        ...

All of these are exported to the central controller FROM EVERY NODE.
I.e., as if each was a "network share", "NFS export", etc.  (in some
cases, some or all may be exported to other nodes as well!  E.g., node1
might act as the user interface for a "headless" node5)

The central controller builds namespaces from this composite set of
exports.  E.g., one such might be:

     /devices
        /audio
           /node1
              /microphone
              /speaker
           /node2
              /microphone
              /speaker
           ...
        /display
           /node1
              /screen
              /backlight
           /node2
              /screen
              /backlight
           ...

while another, equally valid and *concurrently* active might be:

     /devices
        /node1
           /audio
              /microphone
              /speaker
           /display
              /screen
              /backlight
        /node2
           /audio
              /microphone
              /speaker
           /display
              /screen
              /backlight
        ...
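The two composite views above can be sketched as plain dictionary transforms. This is an illustrative sketch only -- the node names, export paths, and grouping functions are my assumptions, not the actual implementation:

```python
# Hypothetical sketch: composing per-node exports into different
# concurrent "views".  Node names and export paths are illustrative
# assumptions, not the actual system's inventory.

node_exports = {
    "node1": ["/audio/microphone", "/audio/speaker",
              "/display/screen", "/display/backlight"],
    "node2": ["/audio/microphone", "/audio/speaker",
              "/display/screen", "/display/backlight"],
}

def view_by_device(exports):
    """First view above: /devices/<device>/<node>/<leaf>."""
    view = {}
    for node, paths in exports.items():
        for p in paths:
            device, leaf = p.strip("/").split("/", 1)
            view.setdefault(device, {}).setdefault(node, []).append(leaf)
    return view

def view_by_node(exports):
    """Second view above: /devices/<node>/<device>/<leaf>."""
    view = {}
    for node, paths in exports.items():
        for p in paths:
            device, leaf = p.strip("/").split("/", 1)
            view.setdefault(node, {}).setdefault(device, []).append(leaf)
    return view
```

Both views are built from the *same* set of exports, so they can be active concurrently without duplicating anything.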

Still one more might be:

     /devices
        /audio
           /talk        ("microphone" from node 1)
           /listen      ("speaker" from node 5)
        /display
           /screen      (from node 3)
           /backlight   (from node 3)
        /touchpanel     (from node 3)
        /switches
           /left-limit  ("3" from node 8)
           /home        ("3" from node 2)
           /right-limit ("1" from node 7)
        /lamps
           /red         ("1" from node 5)
           /green       ("2" from node 4)
           /blue        ("1" from node 3)
        ...

while another task has a different SET of the SAME NAMES -- but
bound to different "node instances".  I.e., the same identical
piece of code running in that "another task's" namespace would
result in "identical" activities -- but taking place on an
entirely different set of node I/O's using these different
namespace bindings.

[This is really slicker than snot!  Tasks need not be concerned
with where their I/O's are located nor worry about interfering
with the activities of other tasks:  if a resource isn't bound
in *your* namespace, there is NOTHING you can do to interfere
with it nor any concern that *it* might interfere with *your*
activities!  The protection domains are absolute -- much like
operating in a chroot(8) jail!]

With this, I can build little "virtual machines" (a poor choice
of term, as it has already been appropriated for a DIFFERENT
use) out of components from anywhere in The System.  E.g.,
I could bind "/switches/3" on node *1* to "/button"; and "/lamps/1"
on node *5* to "/bulb" and then use

     if (/button == ON)
         /bulb = ON

to implement a "remote" light switch.  No hassle of sending messages
across the network to specific IP addresses, implementing an IDL,
marshalling RPC arguments, etc.
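A minimal sketch of that binding idea, assuming a per-task namespace mapping local names to (node, path) pairs and an in-memory stand-in for the transport -- none of these class or method names are the real API:

```python
# Hypothetical sketch of per-task namespace binding.  A task's
# namespace maps local names to (node, path) pairs; the task then
# reads and writes by local name only.  FakeBackend stands in for
# the real network transport.

class FakeBackend:
    """In-memory stand-in for the RPC/transport layer."""
    def __init__(self):
        self.state = {}

    def read(self, node, path):
        return self.state.get((node, path), "OFF")

    def write(self, node, path, value):
        self.state[(node, path)] = value

class Namespace:
    """Per-task view: local names bound to remote resources."""
    def __init__(self, backend):
        self._bindings = {}
        self._backend = backend

    def bind(self, name, node, path):
        self._bindings[name] = (node, path)

    def read(self, name):
        node, path = self._bindings[name]
        return self._backend.read(node, path)

    def write(self, name, value):
        node, path = self._bindings[name]
        self._backend.write(node, path, value)

backend = FakeBackend()
ns = Namespace(backend)
ns.bind("/button", "node1", "/switches/3")   # switch 3 on node 1
ns.bind("/bulb",  "node5", "/lamps/1")       # lamp 1 on node 5

backend.write("node1", "/switches/3", "ON")  # someone flips the switch

# The task's entire "remote light switch" logic -- no addresses, no IDL:
if ns.read("/button") == "ON":
    ns.write("/bulb", "ON")
```

The task's logic never mentions a node, an address, or a wire protocol; rebinding "/button" and "/bulb" to different nodes changes *where* it acts without changing a line of it.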

At the same time, I can have another task that works in a richer
namespace -- e.g., the third one, above (introduced as "while
another") -- that can do:

     for (N in 1..number_of_nodes)
         for (lamp in 1..3)
             /devices/node<N>/lamps/<lamp> = OFF

to implement a "master reset" (after ensuring that no other task
tries to turn any of these ON after having done so).
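The "master reset" loop can be rendered as runnable code against a dict-backed namespace; the /devices/node&lt;N&gt;/lamps/&lt;L&gt; path scheme and the node count are illustrative assumptions:

```python
# Hypothetical, dict-backed rendering of the "master reset" loop.
# Path scheme and node count are illustrative assumptions.

number_of_nodes = 3

# Namespace as the controller might have composed it: every lamp ON.
namespace = {
    f"/devices/node{n}/lamps/{lamp}": "ON"
    for n in range(1, number_of_nodes + 1)
    for lamp in range(1, 4)
}

# The "master reset": walk every node's lamps and force them OFF.
for n in range(1, number_of_nodes + 1):
    for lamp in range(1, 4):
        namespace[f"/devices/node{n}/lamps/{lamp}"] = "OFF"
```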

[This is important as it shows how the multiple displays, etc.
can be handled from a central controller without ever burdening
any individual task with knowledge of more than a *single*
display, etc.]

The takeaways/executive summary:
    - resources appear in namespaces
    - anything that doesn't appear in a namespace can't be referenced
      (i.e., *accessed*!)
    - any new INDEPENDENT namespace can be constructed by binding
      names from an existing namespace to NEW names in the new
      namespace
    - a resource may appear in multiple namespaces concurrently
      and with different names
    - resources can exist on different nodes without that being
      reflected in the construction of ANY of the namespaces in
      which they are enumerated
    - having a name (handle) for a resource allows it to be accessed
      without regard for physical location (!)

There's a small amount of "work" involved implementing this to
create the "device interfaces" (i.e., the software that makes a
particular digital output look like "/lamp/1" and control its state
via "ON" vs. "OFF" -- or "DIM", "FLASH", etc.).  Much of the
application effort involves deciding how to split the namespace
into task-specific namespaces (i.e., limiting what any task can do
to ONLY the things that it SHOULD be able to manipulate and query;
a task that expects to send and receive characters via a serial
port doesn't necessarily need to be able to alter the baud
rate or other characteristics of the PHYSICAL interface!).

For example, if a task shouldn't be able to access a resource, then
its namespace shouldn't include any references BOUND to that
resource!  If a task shouldn't be able to perform a particular
operation on a particular resource (e.g., never turn it "OFF"),
then an agency/proxy can be created to process requests from
the task and dispatch APPROVED requests to the actual resource;
the "bare" resource never appears in the constrained task's
namespace -- the proxy appears, instead!
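A minimal sketch of such an agency/proxy, assuming a trivial lamp object; the class names and the "never OFF" policy are illustrative only:

```python
# Hypothetical sketch of the agency/proxy idea: the constrained task's
# namespace contains only the proxy; approved requests reach the real
# resource, the rest are refused.

class Lamp:
    """The 'bare' resource -- never exposed to the constrained task."""
    def __init__(self):
        self.state = "OFF"

    def set(self, value):
        self.state = value

class NoOffProxy:
    """Forwards requests to the lamp, but refuses to turn it OFF."""
    def __init__(self, lamp):
        self._lamp = lamp

    def set(self, value):
        if value == "OFF":
            return False        # denied; real resource untouched
        self._lamp.set(value)
        return True             # approved and forwarded

real = Lamp()
guarded = NoOffProxy(real)      # what appears in the task's namespace

guarded.set("ON")               # approved
guarded.set("OFF")              # denied: the lamp stays ON
```

Because the task only ever holds the proxy, there is nothing it can do to reach the "bare" resource behind it.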

The biggest problem lies in the use of a central controller in the
implementation.  While it makes the nominal application much easier
to implement (more resources, richer development environment, etc.),
it complicates the sharing of physical resources on the individual
nodes!

I.e., each *node* can access its own *local* "namespace" (and, if
I so choose, even portions of namespaces on other nodes -- in much
the same way that the central controller does!).  This allows
some failsafes to be locally implemented (e.g., "turn off ALL
lamps if they've been left on for more than 12 hours" or "if
unable to contact node X for more than 23 minutes, flash /lamp/2
at a rate of 1 Hz").
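A node-local failsafe of that flavor might be sketched like this; the 23-minute deadline comes from the example above, while the "all lamps OFF" action, names, and dict-backed state are assumptions for illustration:

```python
# Hypothetical sketch of a node-local failsafe: if the central
# controller hasn't been heard from within a deadline, force a safe
# local state -- without depending on the controller at all.

def failsafe(last_contact_s, now_s, timeout_s, lamps):
    """Turn all local lamps OFF if the controller has gone quiet."""
    if now_s - last_contact_s > timeout_s:
        for name in lamps:
            lamps[name] = "OFF"
        return True    # failsafe tripped
    return False       # controller still in contact

lamps = {"/lamps/1": "ON", "/lamps/2": "ON", "/lamps/3": "ON"}

# 25 minutes of silence against a 23-minute deadline: trips.
tripped = failsafe(last_contact_s=0, now_s=25 * 60,
                   timeout_s=23 * 60, lamps=lamps)
```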

THE PROBLEM

The real issue lies with the user interface devices -- display
(screen,touchpanel), audio (in/out), etc. (others not mentioned
in my above descriptions).  They are shared resources that may
*not* want to be "uniquely held" by a particular agency/task.

E.g., while the central controller (aka "main application") typically
"holds" all of the user interfaces (as *it* is interacting with the
various users), an asynchronous "event"/exception may be raised
in a local node that needs to be conveyed to the "application"
(usually, with some urgency).  In a uniprocessor/tightly coupled
SMP implementation, the communication is local and reliable.
The application layer eventually recognizes the event and
decides how the information should be conveyed to the user
and/or the event handled.

The application is AWARE of the event!  (i.e., if it needs to
do something beyond the notification, it *can* do those things)

In the distributed case, that may not be true (e.g., the "event"
may be "Communication Failure").  So, it may not be possible for
the local node to post the event to the central controller and
the central controller to turn around and "announce" the event to
the user *at* the local node (because the controller holds the
display resource for that node).

The local node may, thus, need to borrow/override some portion
of a user interface to convey that information to (and elicit an
acknowledgement from) the user WITHOUT *depending* on the central
controller to perform that interaction.

The local node has no knowledge of the semantic content of *its*
display -- the controller has been painting it based on the
needs of the application as interpreted and implemented by the
controller.  So, the local node has no idea as to which portions
of the display might be precious AT THE CURRENT MOMENT.  Yet,
it has to use *some* portion of the display to communicate this
to the user!

[Note "display" can be a visual or aural indication; same issues apply]

I liken this to early Windows (printer) drivers that would throw up
a modal dialog to complain about the printer being out of paper/toner.
The "driver" unilaterally decided to grab a portion of the display
device WITHOUT REGARD for what the user was doing at the time.
(i.e., there's no way the driver could KNOW what the user was doing
or the importance of individual pieces of display real estate!)
And, it would do so in the most "noticeable" place on the display
surface -- right where the user was typically looking at the time!

Then, to add insult to injury, it took the focus away from <whatever>
the user was doing to DEMAND acknowledgement from the user -- as if
a paper shortage was the most important issue the user was facing, at
that time. (from the driver's perspective, it probably *was*!)

[Note that this has been "fixed" to present those notifications
elsewhere -- in a specific location of the display (Tray) and only
AFTER the user expresses an interest in knowing the details of the
"alert"!  Fine -- if you've got the display real-estate to spare!]

This also presents opportunities for undesired behaviors to creep
into the UX -- the user may be in the process of "doing something"
based on the screen's contents an OhNoSecond prior to the dialog
appearing (e.g., pressing a key, clicking on something, etc.)
and can't stop himself quickly enough for the action to be processed
by the interrupting, focus stealing dialog.  He may never even *see*
the dialog if his eyes are elsewhere (e.g., engaged in some activity
scripted from written notes placed on the worksurface).

There are two aspects of this that are consequential to the
distributed nature of the implementation:
- redirecting the interpretation of the user's actions to
   the proper "context" (local focus vs. normal, remote focus)
- informing the "original" overlaid application that the focus
   has now shifted (i.e., if the application was expecting an
   acknowledgement in a particular time period, that may not be
   forthcoming because the request/indication for that action
   is not APPARENT to the user) -- there may not be an operable
   communication path between the two parties!

As it's a control system, neither the local nor remote "activities"
can really "pause".  If a motor is in motion, it must stop before
it causes damage.  If a user must interact with a mechanism, that
must be done before the mechanism advances to another stage in its
operation.  Etc.  Just because the UI is "interrupted" doesn't
mean the process will be!

In the past, for "richer" implementations, I've reserved a portion of
the user interface for "express messages" -- to ensure that they can
always be seen WITHOUT COMPETING with the normal display content
provided by the application.  In my HA project, I use a spatialized
"display" to present specific indicators/annunciators in different
parts of the "space" around the user's head.  So, a "chime off to the
left" can indicate one thing while a "buzz" off to the right can indicate
something else.  I rely on the character and location of the sound
to reinforce the user's "remembrance" of the event despite his being
actively engaged ("focused") on some other activity.

Here, I don't have the physical resources to implement such a static
"reservation".

One possible approach is to just overlay <whatever> and wait for
some sort of acknowledgement.  The argument being that if the user
isn't WATCHING the display (to acknowledge this asynchronous
notification), then he's not MISSING anything in the overlaid
application, either!  And, if he's dilly-dallying in addressing that
event, then he "deserves" the consequences of the ignored/overlaid
message!

Another approach is to alter the display in some unique way
(invert it, flash it, etc.) to draw attention to the fact that
a notification is pending.  Then, await the user's explicit
acknowledgement to present (and eventually dismiss) that.  I.e.,
RESERVE the "blink attribute" for this indication.

Still another is to use an alternate user interface channel to convey
this alert (e.g., something on the audio to indicate the presence
of pending video; something in the video to indicate the presence
of pending audio!)

An even more Draconian approach might be to add a "special indicator"
("check engine light", buzzer, etc.) that serves this purpose (but,
that adds to recurring cost).

[Note that I've not mentioned these sorts of interactions/notifications
BETWEEN nodes.  E.g., when nodeX is acting as the UI for nodeY!]

Preferences?  Anything I've not considered?
On 10/07/16 16:11, Don Y wrote:
> [Apologies to folks who end up seeing this in multiple places:  USENET,
> forums, mailing lists and personal email.  Similarly, apologies to
> folks only seeing it in *one* such place and, possibly, missing out
> on comments introduced in other venues.  And, apologies if the technology
> may be unfamiliar to some.  I'm in a rush to get this sorted out so I
> can get back to "real" work... including yet another /p.b./ project!  :< ]
-----snipped because the news server found the quote too long
> [Note that I've not mentioned these sorts of interactions/notifications
> BETWEEN nodes.  E.g., when nodeX is acting as the UI for nodeY!]
>
> Preferences?  Anything I've not considered?
Maybe not quite what you wanted as feedback, but as I read your post I
thought:  hmm, HyperCard... aah, HyperCard... yes, HyperCard!

HyperCard is dead, of course, and I found the clones unconvincing when
I looked at them some years ago.  But conceptually there might be
something useful.

Regards
Werner Dahn
On 10/07/16 19:11, Don Y wrote:
> ... handled from a central controller...
Why should the control be centralised? I can think of many situations where you need different things controlled from different places.
> without ever burdening
> any individual task with knowledge of more than a *single*
> display, etc.]
In many cases, more than a single display is desirable. For example, most A/V systems have their own display, but can also be controlled from a phone. (note that the phone sends commands, it's not actually a central controller).
> The takeaways/executive summary:
> - resources appear in namespaces
Good idea. Not a new idea.
> - any new INDEPENDENT namespace can be constructed by binding
>   names from an existing namespace to NEW names in the new
>   namespace
> - a resource may appear in multiple namespaces concurrently
>   and with different names
Good ideas. Not new ideas.
> The biggest problem lies in the use of a central controller in the
> implementation.
So drop it. Allow devices to export their control interfaces (in a discoverable way, see below) and allow other devices (plural) to send commands to those.
> THE PROBLEM
>
> The real issue lies with the user interface devices
No. That's easily solved by use of a "dialog manager", which virtualises the human-computer communication needs, and adapts them to the available display hardware. Again, these are old ideas, at least as old as the 1980's. Apollo even had a product called "Dialog Manager".
> I liken this to early Windows (printer) drivers that would throw up
> ...
> Then, to add insult to injury, took the focus away from <whatever>
That's because the device driver was operating at a level below the dialog manager (in this case, the Windows UI). As you say, the problem was solved by hooking it up differently, making the Windows UI available to the driver.
> One possible approach is to just overlay <whatever> and wait for
> some sort of acknowledgement.
> Another approach is to alter the display in some unique way
> (invert it, flash it, etc.) to draw attention to the fact that
> a notification is pending.
These are all just "human factors" design questions. They're complicated by the need to manage parallel processes - and to avoid switching the user's train of thought needlessly - but they must be tackled by modeling the communication on the user's mental processes, not on the hardware or the physical implementation. That's what a dialog manager must do.
> Preferences? Anything I've not considered?
In my opinion the interesting problem here is how nodes and
controllers can discover the capabilities present in the network.
Mere enumeration (like USB device enumeration) is not enough - that
just shows what devices exist, not what purpose they serve or even how
they are connected.  Discovery by category (as implied by your
namespaces) is not enough.  It needs to be richer than this.

DNS-SD is an example of a design that tries to solve this problem; it
allows sending a query like "where is the closest A3 color printer to
me?".

I'd love to see an A/V system with this kind of auto-discovery.  When
I hit "Play" I want the system to know which room I'm in, and to turn
on the right amplifier and/or screen and route the audio to the right
devices.  I don't want to juggle half a dozen remote controls just to
play a sound.  I don't want to figure out which devices need to be
turned on, to set up the right input channel selectors, or find the
right volume control to adjust.  I want to be able to do this from
*any* controller that comes to hand, including my phone.  There needs
to be an industry-standard protocol for this stuff.
I hate it when someone posts a long tome, and you spend a long time 
responding, and the OP never returns to the thread or engages with your 
response.
On Thu, 14 Jul 2016 10:17:18 +1000, Clifford Heath
<no.spam@please.net> wrote:

> I hate it when someone posts a long tome, and you spend a long time
> responding, and the OP never returns to the thread or engages with your
> response.
Don mentioned in the "task loop" thread that he'd be busy for several
days.  Be patient ... I'm sure he'll return to this.

George
On 7/10/2016 5:24 PM, Clifford Heath wrote:
> On 10/07/16 19:11, Don Y wrote:
>> ... handled from a central controller...
>
> Why should the control be centralised?  I can think of many
> situations where you need different things controlled from
> different places.
My original quote:

   "This is important as it shows how the multiple displays, etc.
   can be handled from a central controller without ever burdening
   any individual task with knowledge of more than a *single*
   display, etc."

You appear to be conflating "control" with "controller".  The control
*algorithm* resides in a single CPU.  Why shouldn't it?  Distributing
the algorithm adds a fair bit of complexity for very little gain (in
this case -- the "process" isn't so taxing that it needs lots of
MIPS).

And, the "displays, etc." can be HANDLED from that (single)
controller.  No mention of how many or where they are located -- just
that they are "multiple" and *controlled* (i.e., DRIVEN) from the
central controller.

The *I/O's* are where the inefficiency traditionally resides.
Typically, one or more (24") equipment racks are located in a "control
room".  The racks house the "controller" and a boatload of (costly,
standardized) I/O interfaces.  The I/O interfaces are tethered to the
sensors and actuators in the field by *miles* of (typ) #18-20AWG wire
run through cable trays to connect to the sensors located 100+
"electrical feet" away from the controller.  (100' of wire can travel
a surprisingly SHORT distance when it has to be "routed" in a
non-point-to-point fashion!)  I.e., 25 I/O's consume a mile+ of
conductors (a single 100' pair for each).

All of these I/O's terminate on large barrier/terminal strips/DIN
rails in the back of the equipment rack(s).  From there, travelers
connect the strips to the actual I/O interfaces which, in turn, are
connected to the controller/display/UI/etc.  (wiring the interfaces to
the terminal strips can take a work-week (!) -- noting that each
conductor must be individually labeled to ensure each end can be
replaced to its intended location if ever disconnected, dressed in an
appropriate harness, etc.  And, you haven't BEGUN to address the field
wiring!)
[You aren't "mass producing" these equipment racks as they tend to
have their hardware "tuned" to the particular installation.  The
number and types of each I/O can vary from one installation to the
next.  Even within a given "facility".]

The equipment rack becomes a piece of furniture.  It can't practically
be moved due to the girth of its field wiring harness.  And, the labor
involved in MOVING those terminations to a new location!  This tends
to make the "control room" application specific.  Or, cause the
control room to migrate into the physical process space (which can
have some advantages -- but at some cost!)

Some of the I/O's will not be capable of driving long cable runs.  So,
often "black box" signal conditioners are added *at* the (remote)
sensors/actuators -- just to get the signals to/from the equipment
rack.  All of these little boxes (in the field and in the equipment
rack) tend to have idiot lights to give you a reassurance that they
are powered up, working properly (e.g., if an input "goes open", you
want to see some indication of that cuz the controller may have no way
of knowing that "for fact" -- a 4-20mA sender can report open/short
but only if there is a data path to the controller for that
information!).

To combat this wiring nightmare, move the signal conditioning AND data
acquisition *into* the field, proximate to the sensors and actuators
to which they interface.  Send "messages" back to the central
controller instead of the actual physical *signals* involved.  The
controller then needs no I/O's -- other than user interface,
persistent store and communications link.

No more equipment rack.  No more miles of #18AWG -- strung by union
electricians, etc.  No more "buffers" along the way.  Just lots of
little "I/O servers" tied to the sensors (that you had to purchase,
regardless!)

[Nothing "new" here, either.  There are indu$trial control bu$$e$
that do the$e $ort$ of thing$]

Typically, a "Supervisor" is responsible for monitoring the process;
dealing with exceptions that the controller can't practically address.
An "Operator" is often available as there are times when someone needs
to actually put eyes/hands on an actuator/sensor in the field (while
someone else continues to shepherd the process).

[Sometimes, Operators/Supervisors are shared among simultaneously
running processes]

If, for example, the Supervisor notices something amiss (or, is
informed of something wonky by the controller), *he* has to figure out
how to resolve the problem -- usually without halting the process!

[Doing so can cost you 4-8 hours of production; not something The Boss
would like, especially if the problem turned out to be a clogged pitot
tube, gunked up pump impeller, etc. -- things that could be fixed or
replaced (swapped out) without compromising the "process"]

He can run whatever diagnostics the System Designer gave him.  And, he
can look at the idiot lights ("check engine") inside the equipment
rack to try to get a feel for where the problem might lie.  But,
often, he'll need to dispatch an Operator to check on some physical
aspect of the process.  Something that can't be inferred from an
observation of idiot lights -- nor rectified without "hands on":

   "Bill, why don't you climb up to the inlet air handler in the
   mezzanine and see why I'm getting these low temp readings.  Maybe
   we've got a bad RTD up there..."
>> without ever burdening
>> any individual task with knowledge of more than a *single*
>> display, etc.]
>
> In many cases, more than a single display is desirable.
> For example, most A/V systems have their own display,
> but can also be controlled from a phone.  (note that
> the phone sends commands, it's not actually a central
> controller).
"without ever burdening any INDIVIDUAL task with knowledge of MORE
THAN A *SINGLE* display, etc."

Being able to partition the namespace makes it intuitive to exploit
independent "machines", each with a particular set of responsibilities
(incl UI's).  Just like creating a "process" with (stdin, stdout,
stderr) defined by its PARENT, you can create a namespace appropriate
for a particular undertaking and pass that to the task responsible for
that undertaking.  The task then doesn't have to deal with identifying
its "display", "keyboard", etc.  E.g., update_display() vs.
inlet_air_temperature_control_loop(), pump_flow_rate_control_loop(),
etc.

[All of this, of course, is "obvious" -- with the caveat that the
application hasn't just been partitioned into smaller "tasks" but,
rather, that the address space, name space, communication space, etc.
have also been appropriately decomposed.  It's AS IF there were a
bunch of individual co-operating machines working on the problem, each
ISOLATED from the others except for very visible communication paths
-- not just a set of "tasks" (which may or may not be truly
isolated).]

As adding yet another "machine" is trivial, there are natural
consequences that can be exploited in the design!

Note that each of these "black boxes" in the field can't *just* be a
black box; too much information would be hidden within that would have
a dramatic impact on the operation and troubleshooting of the
"process" (application).  Whereas the signal conditioning boxes
originally had "idiot lights", these boxes can have comparable
function.  But, these can have a variety of I/O's instead of some
"store bought" notion of how many of which type SHOULD be supported in
a single "box".  E.g., a 4-20mA sender for *one* process variable.  A
single "check engine" light would be useless.
Note that the signal conditioners in the equipment rack had the
advantage that the "system console" was nearby so a user could *probe*
some wiring and correlate that to reported conditions on that display.
In the remote case, the console isn't (easily) accessible!  Unless you
have the equivalent of a "remote display" (and a means by which it can
communicate with the REAL console)

And, you want a display that can be reasonably generic -- not tied to
a particular type (or mix!) of I/O's.  I.e., a digital input conveys
different information than an analog one.  An LED might suffice to
convey the sensed state of an input (or commanded of an output) but
would be ineffective at conveying an analog reading or setting.

So, add a "real" display -- though not a "full featured" display
(those are more costly *and* aren't typically used "in operation" for
anything more than signalling faults, etc. -- don't piss away monies
needlessly!)  And, in keeping with the leanness of the I/O servers,
put the *minimum* amount of support in the node for the display.  No
need to render fonts, draw graphic primitives, etc.  Just export the
framebuffer as a resource.  Let a task running in the central
controller scribble on /node5/framebuffer just as easily as it would
on /console/framebuffer!  NOTHING "extra" to support that capability!
We're not trying to play full motion video so the bandwidth
requirements are insignificant.

[Nothing new *here*, either!  /cf./ Sun Ray, Pano Box, etc.]

Likewise, export an interface to any "user input" devices (even if
it's just a couple of "soft buttons" alongside the display surface!)
that reside on the remote(s).  Again, NOTHING extra to support them
that isn't already needed elsewhere in the design!

During normal/nominal operation, the display can indicate "OK" -- or
whatever -- to summarize that everything that this node handles is
operating properly.  Or, "ERROR" if a fault is discovered.
In each case, a task IN THE CENTRAL CONTROLLER is painting that
"image" into the display.  The local node has no idea what it *means*!
Because the *controller* understands the users, it can opt to paint
informative messages on the remote display (as well as on the system
console):

   "Inlet air temperature low.  Test pin #6"

(because the controller knows how each I/O is wired to each particular
remote device along with its "application specific" name -- not just
"analog input #2")

As this STILL underutilizes the display's abilities, that task (the
one that has its "/display" bound to *this* node's "./framebuffer")
that is responsible for updating *this* display -- and, likely, no
others! -- might, instead, normally opt to display the current values
for all sensors and settings for all actuators/effectors served by
this node.  Possibly in a predefined sequence if display real-estate
is scarce.  And, in units of measure that are appropriate for the
application!

   "Inlet air temperature: 38C"
   "Inlet air moisture: 35g/m^3"
   "Blower speed: 200RPM"
   "Outlet air temperature: 30C"
   ...

As such, an Operator nearby (by coincidence or having been explicitly
dispatched by the Supervisor) can see these values and reassure
himself that all is well.  Or, get advanced warning of a pending fault
before it becomes enough of a problem for the controller to require
remedial action.

With the controlling task monitoring the user INPUT controls that have
been exported by that node, the Operator can interact with that task
-- though on a more limited scale than possible from the system
console (because this is not as "rich" a resource as that!).

When troubleshooting a "low temperature" condition on an inlet air
sensor, the Operator might *direct* the task executing on the central
controller (but using the remote's user i/f!) to:

- !display the current temperature being sensed by "this" RTD
- "Hmmm... seems really low!"
- (cautiously puts hand on the physical mechanism in question...)
- "Yeah, it *is* really cool!  So, maybe it's not a temperature sensor
  fault but, rather, the heating unit in the air handler located up on
  the mezzanine might be misbehaving!"
- !display the current setting/status for the heating unit
- "Hmm... it claims to be off.  So, no wonder this mechanism is cool!"
- !command heating unit ON
- (twiddles thumbs for a while)
- "Nope, mechanism is still ice cold.  Asking the heating unit to
  report its status *claims* everything is OK.  I guess I'll have to
  go PHYSICALLY check the heating unit..."
- checks wiring, probes for voltages on the I/O connectors
- "Hmmm... no power, here.  Either this contactor/OPTO22 has failed or
  there's a tripped breaker, somewhere..."
- now, using a *different* UI (proximal to the "distant" air handler)
  he continues interacting with the central controller's "Diagnostic"
  process to further resolve THAT problem

Note that the user doesn't have to keep track of *which* heater to
command "on" ("Hmmm... is it heater #3?  Or, #4?") because the
diagnostic task that is presently bound to that UI has been designed
with that context in mind!  It's *obvious*, to it (and the parent that
spawned it AFTER creating the namespace for it to use), that the only
heater that makes sense is the one associated with that *control* task
-- and, now, *diagnostic* task!

Each of the displays involved (reaction vessel air sensor, inlet air
handler and system console) operates independently of the others --
and the actions initiated by the user at each are subjected to
constraints that the *process* might be required to impose ("Sorry, I
can't turn the heater on, now, because a dehydration process is
active")
>> The takeaways/executive summary:
>> - resources appear in namespaces
>
> Good idea. Not a new idea.
>
>> - any new INDEPENDENT namespace can be constructed by binding
>>   names from an existing namespace to NEW names in the new
>>   namespace
>> - a resource may appear in multiple namespaces concurrently
>>   and with different names
>
> Good ideas. Not new ideas.
*NONE* of these are new ideas!

Display servers have been around for 35+ years. I'd imagine it's not a stretch from there to generic "I/O servers" (i.e., a temperature server, a hygrometer server, a motor server, a manometer server, etc.)

Process containers/protected namespaces for probably a decade+ more than that. Ditto multitasking (though "threads" post-date processes).

Naming resources in a UNIFIED namespace eventually gave way to *isolated*/protected namespaces (in much the same way that UNIFIED address spaces were broken into *isolated*/disjoint protection domains).

Transparent support for remote resources in the "local" namespace (i.e., the developer doesn't know if "/uart" is a local resource or a *remote* one -- let alone how to "discover" it!)

Language support for IPC/RPC/RMI without all the low-level related crud (i.e., "channel <- message" instead of "manually" setting up a connection/socket, resolving addresses/ports, crafting an IDL and IDL compiler, marshalling arguments, etc.)

All *OLD* technologies!  Yet, note how few designs avail themselves of these mechanisms!  As if "multitasking" was the only tool that could help design simpler, more robust/maintainable systems.  :-/

[And, the "excuse" that these mechanisms are "expensive" is a dodge! I've been steadily migrating them to smaller and smaller implementations over the past 15+ years.]
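The two namespace "takeaways" quoted above can be sketched in a few lines of Python. Everything here is invented for illustration (a dict standing in for a real namespace server), but it shows the two properties: a new, independent namespace built by binding existing names to NEW names, and one resource appearing in multiple namespaces under different names.

```python
# Toy namespace: name -> resource. A real system would back this
# with a server; a dict is enough to show the semantics.

class Namespace:
    def __init__(self):
        self._bindings = {}

    def bind(self, name, resource):
        self._bindings[name] = resource

    def lookup(self, name):
        return self._bindings[name]   # KeyError == "no such name"

root = Namespace()
rtd_channel = object()                # stand-in for a real I/O server
root.bind("/node7/analog_input_2", rtd_channel)

# Task-specific namespace: SAME resource, application-specific name.
task_ns = Namespace()
task_ns.bind("/inlet_air_temperature",
             root.lookup("/node7/analog_input_2"))

assert task_ns.lookup("/inlet_air_temperature") is rtd_channel
```

The task sees only "/inlet_air_temperature"; it has no way to even name "/node7/analog_input_2", which is the filtering property discussed later in the thread.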
>> The biggest problem lies in the use of a central controller in the
>> implementation.
>
> So drop it. Allow devices to export their control interfaces
> (in a discoverable way, see below) and allow other devices
> (plural) to send commands to those.
Doesn't make sense to have the remotes tell the system what they are and how they are used.  The *system* has been designed with a specific set of I/O's and requirements in mind.  The remotes have been SELECTED to fulfill those needs.

If the system could "discover" an extra air handling unit, it wouldn't know how to *use* it!  (Where are its ducts plumbed in relationship to the process air flow?  What ROLE had the system designers envisioned for this AHU?  Is it to bolster control of temperature?  Moisture?  Reduce the static pressure on an upstream AHU?  Etc.)

A remote simply indicates its presence (MAC) on POST.  The system consults its configuration management subsystem to "give meaning" to each of these devices -- as well as verify that those that are required are, in fact, present.

[You don't just "install" a node; you have to "introduce" it to the CM subsystem so the system as a whole knows how it will be using it]
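The "introduce, don't just install" idea can be sketched as follows. The MACs, role names and data structure are all invented for illustration; the point is that meaning flows from a designer-written registry to the announced hardware, never the other way around.

```python
# Hypothetical CM registry: written by the system designer at
# "introduction" time, consulted when remotes announce themselves.

CM_REGISTRY = {
    "00:1a:2b:3c:4d:01": "inlet_air_handler",
    "00:1a:2b:3c:4d:02": "reaction_vessel_sensors",
}
REQUIRED_ROLES = {"inlet_air_handler", "reaction_vessel_sensors"}

def check_in(announced_macs):
    """Map MACs seen at POST to their designed-in roles and report
    any required roles that never checked in."""
    roles = {mac: CM_REGISTRY[mac] for mac in announced_macs
             if mac in CM_REGISTRY}
    missing = REQUIRED_ROLES - set(roles.values())
    return roles, missing

roles, missing = check_in(["00:1a:2b:3c:4d:01"])
# 'missing' now flags the role that never announced itself
```

An unregistered MAC is simply ignored: a device the designer never "introduced" has no role, so the system has nothing to say to it.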
>> THE PROBLEM
>> The real issue lies with the user interface devices
>
> No. That's easily solved by use of a "dialog manager",
> which virtualises the human-computer communication needs,
> and adapts them to the available display hardware. Again,
> these are old ideas, at least as old as the 1980's.
> Apollo even had a product called "Dialog Manager".
That means nodes need to understand user interactions.  They don't understand "temperatures" or "pressures" or "flow rates" -- but you want them to understand user interactions?  They exist just to "save wire" (the signal conditioning and data acquisition hardware would have been present, regardless!  It's just been moved into the field instead of the equipment rack).

It's far easier to just let the remote node be an "I/O server" (including display/keyboard server).  These sorts of user interactions are already, typically, included in the applications' designs -- but, always from the "system console"; from the *single* user.  I.e., historically, the Supervisor can command the inlet air heater ON from the central console; letting an Operator do it from somewhere in the field is not significantly different.  Esp. if there is virtually no (software) cost to providing that "remote UI"!

What you want, IMO, is a consistent way of presenting "exceptions" to the "remote" user.  So a remote display that wants to complain of a shorted RTD *OR* a communication failure presents that information to the user in a consistent manner.  I.e., the former can be reported with the help of the central controller (as above).  But, the latter can't (cuz the controller is inaccessible) -- it has to be "handled" by the remote node itself!
>> I liken this to early Windows (printer) drivers that would throw up
>> ...
>> Then, to add insult to injury, took the focus away from <whatever>
>
> That's because the device driver was operating at a level below
> the dialog manager (in this case, the Windows UI). As you say,
> the problem was solved by hooking it up differently, making the
> Windows UI available to the driver.
>
>> One possible approach is to just overlay <whatever> and wait for
>> some sort of acknowledgement.
>> Another approach is to alter the display in some unique way
>> (invert it, flash it, etc.) to draw attention to the fact that
>> a notification is pending.
>
> These are all just "human factors" design questions. They're
> complicated by the need to manage parallel processes - and to
> avoid switching the user's train of thought needlessly - but
They are *easier* in the distributed approach because the user isn't trying to mix "operational activities" (like monitoring the running process) with "diagnostic activities". It's the equivalent of overlaying a "diagnostics window" on the system console so the operational issues are no longer "a distraction"... (except you can't do that as someone has to be minding the store!)
> they must be tackled by modeling the communication on the
> user's mental processes, not on the hardware or the physical
> implementation. That's what a dialog manager must do.
>
>> Preferences? Anything I've not considered?
>
> In my opinion the interesting problem here is how nodes and
> controllers can discover the capabilities present in the
> network. Mere enumeration (like USB device enumeration) is
> not enough - that just shows what devices exist, not what
> purpose they serve or even how they are connected. Discovery
> by category (as implied by your namespaces) is not enough.
> It needs to be richer than this. DNS-SD is an example of a
> design that tries to solve this problem; it allows sending
> a query like "where is the closest A3 color printer to me?".
Doesn't apply.  The *process* (application) is the thing that assigns meaning to resources.  The "nearest heater" means nothing to the nodes (or the system) -- regardless of how you define "near".  What the application cares about is the *roles* associated with specific resources.

E.g., there may be a temperature sensor on the output of the inlet air handler -- used by the control loop that regulates inlet air temperature and moisture.  In some installations, this might ALSO act as the "input air temperature" for the reaction vessel.  Or, if there is too much "transport" (ductwork), another sensor might be located *at* the reaction vessel to ensure more accurate knowledge of the ACTUAL temperature of air entering the process.  At the same time, the input air temperature might be addressed with a cascaded control loop to ensure this "input process air" is at the desired operating point.  The application needs to know these things to make tuning more efficient, adjust alarm response times, etc.

If the application discovered that it had a "gun turret azimuth motor" available, it wouldn't know how to shoot down enemy aircraft!  :>

A colleague has suggested what might be a clever hack to address the problem.  But, I think it will require a change to the hardware.  Of course, the photoplots went off a week ago to the board house!  It sure would have been nice for the solution to have appeared earlier!  My error being the assumption that I'd "fix it in software" without acknowledging that some things *can't* be fixed, there!  (e.g., hard to add a "motor" to a device just by tweaking the code!)

I've spent the past couple days looking for cheap ways to patch the existing design (so I don't lose the time that was spent getting the boards).  But, I suspect I will soon have to switch to figuring out how to *modify* the design and lay out some new artwork.  Then, update the specs (this past week's effort :< ) to reflect this.

(sigh)  "The best laid plans..."
Trying to get things done quickly inevitably takes longer.  :<  But, in the grand scheme of things, it's not really a big delay.  Just not as aggressive a schedule as I had originally hoped...
On 17/07/16 19:47, Don Y wrote:
> On 7/10/2016 5:24 PM, Clifford Heath wrote:
>> On 10/07/16 19:11, Don Y wrote:
>>> ... handled from a central controller...
>> Why should the control be centralised? I can think of many
>> situations where you need different things controlled from
>> different places.
> My original quote:
> "This is important as it shows how the multiple displays, etc.
> can be handled from a central controller without ever burdening
> any individual task with knowledge of more than a *single*
> display, etc.
>
> You appear to be conflating "control" with "controller".
No. You're just misinterpreting my intention.
> The control *algorithm* resides in a single CPU. Why shouldn't it?
Why should it? With my example (home A/V gear), individual components can be added or replaced at any time. If each one is designed to know its role and advertise its features, it shouldn't require reconfiguring anything else when it is added or replaced. I don't class that as "very little gain". It's exactly the behaviour I expect and have (so far) never seen.
> And, the "displays, etc." can be HANDLED from that (single) controller.
I don't want a controller to "handle" the display on my remote or on my phone. I just want it to accept commands, and to either comply or indicate why it can't.
> No mention of how many or where they are located -- just that they are
> "multiple" and *controlled* (i.e., DRIVEN) from the central controller.
Unacceptable to me in the case I've described. I don't have to install a new cable tray to add an audio device to a system. My guests' devices automatically become an ephemeral part of the system. Very different from the system you're describing, but I have close to zero experience of those.
> ...to figure out how to
> resolve the problem -- usually without halting the process!
> [Doing so can cost you 4-8 hours of production;
One plant where my software was installed (a system for updating the software on robot controllers) runs at a downtime cost in six digits (Euros) per *minute*. I had no direct involvement with the implementation, but needless to say, they were pretty keen to ensure continuous operation!
> Just export the framebuffer as a resource.
And when you upgrade the display for a bigger one? Reorganise all the display code to handle multiple display sizes, or just waste the capability of the better ones?
> Likewise, export an interface to any "user input" devices
The key idea here is "presentation/semantic independence." A controller should expose the semantic data, and a display application should be able to decide how to display it. We did this when our banking clients in Europe built online Internet banking applications that talked to the exact same back-end applications that bank tellers were already using. Obviously a very different presentation was required, but the semantics of the transactions was identical. Because the back-end applications had been designed to use structured messages tailored to the semantics of each kind of transaction, it was simple to add new presentation applications. These were the first widely-used Internet banking applications in Europe, before web browser security was adequate to the task.
> Because the *controller* understand the users, it can opt to
> paint informative messages on the remote display
In what language? The banking transactions are language-independent, so the multi-lingual aspects were limited to the presentation applications. It would have been a big pain to make the core systems multi-lingual.
> Display servers have been around for 35+ years.
If you're thinking of X terminals, they were a bad idea from the start. Exporting a frame buffer and drawing primitives across a network, without exporting a way to export code to run composite objects (including low-level input handling), relies on network speed and latency always being good enough (at a suitable price point). Sun tried to address this with NeWS, but we actually solved it with OpenUI.

This code also ran NASDAQ for a decade. 10,000 trader workstations, growing to 45,000, with rapid screen updates (initially bursting at 10/second, growing to 100x that). Tell me again why it would have been sensible for the central transaction server to transmit *every pixel* to be seen by *every trader*...? Obviously, not all the screens were the same size, nor did every trader want the same layout.

The central server doesn't even need to see offers - only accepted bids.
> Language support for IPC/RPC/RMI without all the low-level related
> crud (i.e., "channel <- message" instead of "manually" setting
> up a connection/socket, resolving addresses/ports, crafting an IDL
> and IDL compiler, marshalling arguments, etc.)
> All *OLD* technologies!
RPC remains a terrible idea. This kind of app needs asynchronous symmetrical structured messaging, so you never have to wait for a specific response. RPC is an attempt to lock each program into a linear time-line with a single thread of execution.
>>> The biggest problem lies in the use of a central controller in the
>>> implementation.
>>
>> So drop it. Allow devices to export their control interfaces
>> (in a discoverable way, see below) and allow other devices
>> (plural) to send commands to those.
>
> Doesn't make sense to have the remotes tell the system what they
> are and how they are used. The *system* has been designed with
> a specific set of I/O's and requirements in mind. The remotes
> have been SELECTED to fulfill those needs.
In a modern A/V system, components come and go frequently, though some core components are fairly stable.
>> If the system could "discover" an extra air handling unit, it
>> wouldn't know how to *use* it!
Each unit should be configured (at install time) to know its local role, and the controller should be flexible enough to control it with minimal additional configuration data.
> [You don't just "install" a node; you have to "introduce" it to
> the CM subsystem so the system as a whole knows how it will be
> using it]
Right. But if the node has already queried the network to know what it is connected to (or the installer has selected connections from the other nodes visible on the network), then the central controller will often be able to manage it with no further config.
> A colleague has suggested what might be a clever hack to address
> the problem. But, I think it will require a change to the hardware.
> Of course, the photoplots went off a week ago to the board house!
> It sure would have been nice for the solution to have appeared
> earlier! My error being the assumption that I'd "fix it in software"
> without acknowledging that some things *can't* be fixed, there!
> (e.g., hard to add a "motor" to a device just by tweeking the code!)
>
> I've spent the past couple days looking for cheap ways to patch
> the existing design (so I don't lose the time that was spent
> getting the boards). But, I suspect I will soon have to switch to
> figuring out how to *modify* the design and layout some new artwork.
> Then, update the specs (this past week's effort :< ) to reflect this.
Isn't that always the way? But you probably would have found some other reason for a board spin anyhow - it's just the iterative nature of development.
> (sigh) "The best laid plans..." Trying to get things done quickly
> inevitably takes longer.
Hence the saying "more haste, less speed" :).

Clifford Heath.
Hi Don - You might want to have a look at TIB/Rendezvous.
For several decades now it has provided a namespace-driven,
many-node publish-subscribe bus used to implement lots
of distributed applications (quotation and trading systems,
semiconductor fabs, etc, etc, etc). The facilities it includes
address your questions, I think...

Hope that helps,
Best Regards, Dave
On 7/17/2016 7:38 PM, Clifford Heath wrote:
> On 17/07/16 19:47, Don Y wrote:
>> On 7/10/2016 5:24 PM, Clifford Heath wrote:
>>> On 10/07/16 19:11, Don Y wrote:
>>>> ... handled from a central controller...
>>> Why should the control be centralised? I can think of many
>>> situations where you need different things controlled from
>>> different places.
>> My original quote:
>> "This is important as it shows how the multiple displays, etc.
>> can be handled from a central controller without ever burdening
>> any individual task with knowledge of more than a *single*
>> display, etc.
>>
>> You appear to be conflating "control" with "controller".
>
> No. You're just misinterpreting my intention.
No, you're misunderstanding my application!
>> The control *algorithm* resides in a single CPU. Why shouldn't it?
>
> Why should it? With my example (home A/V gear), individual
> components can be added or replaced at any time. If each
> one is designed to know its role and advertise its features,
> it shouldn't require reconfiguring anything else when it is
> added or replaced. I don't class that as "very little gain".
> It's exactly the behaviour I expect and have (so far) never
> seen.
I'm not designing an A/V system!
>> And, the "displays, etc." can be HANDLED from that (single) controller. > > I don't want a controller to "handle" the display on my remote > or on my phone. I just want it to accept commands, and to either > comply or indicate why it can't.
I'm not using phones! Nor do I want to put all that much smarts into a "display" that is effectively replacing a "check engine" light! (reread my previous "traditional implementation" description)
>> No mention of how many or where they are located -- just that they are
>> "multiple" and *controlled* (i.e., DRIVEN) from the central controller.
>
> Unacceptable to me in the case I've described. I don't have to
> install a new cable tray to add an audio device to a system.
> My guests' devices automatically become an ephemeral part of
> the system. Very different from the system you're describing,
> but I have close to zero experience of those.
Great! For an A/V system. If I add another temperature sensor to a process, how will the process controller know how to interpret the temperature data that it provides? What if I power it off and move it to another, DIFFERENT process? How will it affect THAT process's operation? (e.g., smelting iron ore instead of refrigerating frozen foodstuffs) [If I took a loudspeaker from your A/V system and used it AS AN INPUT to *sense* vibrations, would you expect your system to magically know of this new, radically different, application?]
>> ...to figure out how to
>> resolve the problem -- usually without halting the process!
>> [Doing so can cost you 4-8 hours of production;
>
> One plant where my software was installed (a system for
> updating the software on robot controllers) runs at a
> downtime cost in six digits (Euro's) per *minute*. I had
> no direct involvement with the implementation though, but
> needless to say, they were pretty keen to ensure continuous
> operation!
A tablet press can produce in excess of 500,000 tablets/hour (e.g., 200/second). Depending on the product being produced, that can be $2/sec (for something like aspirin) to $600K/*second* (yes, there are solid dose formulations that are THAT expensive) in lost "retail opportunity". If a "bad" tablet finds its way into the BARREL that collects finished product, that 5ms event costs you the day's production (even commonplace medications get pricey when dealing with *barrels* full!)
>> Just export the framebuffer as a resource.
>
> And when you upgrade the display for a bigger one? Reorganise
> all the display code to handle multiple display sizes, or
> just waste the capability of the better ones?
You're not thinking very creatively:

   /framebuffer
   /resolution
   /depth
   /pitch
   /rate
   ...

And, as it is an ACTIVE object, you can effectively play "what if" scenarios:  "if I SET resolution to '800x600', what 'depth' choices does it provide?"

Of course, there's a limit to the geometries and capabilities that you'd want to support; RS170 video in a 4:3 aspect is a poor fit to a 600x20 (!) display.
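The "active object" idea above can be sketched as follows. The mode table and attribute names are invented for illustration; the point is that a client can query "what if" combinations without committing to (or breaking) the current mode.

```python
# Hypothetical framebuffer exported as an active object: clients can
# probe supported modes before committing to one.

class FrameBuffer:
    MODES = {                          # resolution -> supported depths
        "800x600":  [8, 16],
        "1024x768": [8],
    }

    def __init__(self):
        self.resolution = "800x600"
        self.depth = 8

    def depths_for(self, resolution):
        """The 'what if' query: no state is changed."""
        return self.MODES.get(resolution, [])

    def set_mode(self, resolution, depth):
        if depth not in self.depths_for(resolution):
            raise ValueError("unsupported mode")
        self.resolution, self.depth = resolution, depth

fb = FrameBuffer()
choices = fb.depths_for("800x600")     # probe before committing
fb.set_mode("800x600", max(choices))
```

A display-updating task can thus adapt to a bigger display by querying, not by being rewritten.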
>> Likewise, export an interface to any "user input" devices
>
> The key idea here is "presentation/semantic independence."
> A controller should expose the semantic data, and a display
> application should be able to decide how to display it.
>
> We did this when our banking clients in Europe built online
> Internet banking applications that talked to the exact same
> back-end applications that bank tellers were already using.
> Obviously a very different presentation was required, but
> the semantics of the transactions was identical. Because the
> back-end applications had been designed to use structured
> messages tailored to the semantics of each kind of transaction,
> it was simple to add new presentation applications. These
> were the first widely-used Internet banking applications in
> Europe, before web browser security was adequate to the task.
Can they look at the state of a particular digital I/O pin *in* the "display controller"? Can they tell you the current power supply voltage for the display controller?
>> Because the *controller* understand the users, it can opt to
>> paint informative messages on the remote display
>
> In what language? The banking transactions are language-independent,
> so the multi-lingual aspects were limited to the presentation
> applications. It would have been a big pain to make the core
> systems multi-lingual.
We're not presenting tomes of information.  Rather, annotating "readouts" and "controls".  L10N is relatively easy (assuming you have a predefined user base for a particular L10N; you wouldn't support Faroese until you got an *order* for such a system!).

And, as all of this is *in* the controller, the "remote displays" automagically get this capability -- without ever having to understand the character set or presentation characteristics of the language involved!  Just like temperature sensors and motors don't have to be concerned with what they are measuring or affecting!
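A minimal sketch of the controller-side L10N point, with invented catalogs (the German label here is my own illustrative translation, not from the system): the remote display only ever receives rendered text, so it needs no knowledge of the language involved.

```python
# Hypothetical per-locale catalogs for SHORT readout labels, held
# entirely in the central controller.

CATALOGS = {
    "en": {"inlet_air_temp": "Inlet air temperature"},
    "de": {"inlet_air_temp": "Zulufttemperatur"},
}

def annotate(key, value, unit, locale="en"):
    """Render one readout line in the requested locale; the remote
    display just paints the resulting string."""
    label = CATALOGS[locale].get(key, key)   # fall back to the raw key
    return f"{label}: {value}{unit}"

print(annotate("inlet_air_temp", 38, "C", "de"))
```

Adding Faroese would mean adding one catalog to the controller; no remote node changes at all.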
>> Display servers have been around for 35+ years.
>
> If you're thinking of X terminals, they were a bad idea from the
> start. Exporting a frame buffer and drawing primitives across a
> network, without exporting a way to export code to run composite
> objects (including low-level input handling), relies on network
> speed and latency always being good enough (at a suitable price
> point). Sun tried to address this with NeWS, but we actually
> solved it with OpenUI.
>
> This code also ran NASDAQ for a decade. 10,000 trader workstations,
> growing to 45,000, with rapid screen updates (initially bursting
> at 10/second, growing to 100x that). Tell me again why it would
> have been sensible for the central transaction server to transmit
> *every pixel* to be seen by *every trader*...? Obviously, not
> all the screens were the same size, nor did every trader want
> the same layout.
>
> The central server doesn't even need to see offers - only accepted
> bids.
Now, cut the cost of those "terminals" to JUST that of the hardware required to run the display -- without supporting your "display objects".  Hmmm... the solution doesn't work!  Because you don't need to repaint "check engine lights" at 10Hz!

There's nothing wrong with exporting a frame buffer.  You just have to know the needs of the application that will *use* that frame buffer!  You don't need a fancy OS to handle the "richer" tasks running in the system; the "communication interface" (ethernet/wireless stack, CANbus, etc.) is the most bloated piece of code in the box!

E.g., I use X Windows terminals in my day-to-day activities.  I could strip the terminal down to a *literal* framebuffer and still get admirable performance.  It doesn't cost much to paint a glyph into a window every time I strike a key!

N.B. a Pano Box doesn't even have "firm/software" in the "terminal"!  Lousy if you're trying to display full-motion video.  But, for largely static/text displays, it's MORE than you need!  And, it doesn't care if you're displaying text, graphics, Swahili, etc.
>> Language support for IPC/RPC/RMI without all the low-level related
>> crud (i.e., "channel <- message" instead of "manually" setting
>> up a connection/socket, resolving addresses/ports, crafting an IDL
>> and IDL compiler, marshalling arguments, etc.)
>> All *OLD* technologies!
>
> RPC remains a terrible idea. This kind of app needs asynchronous
> symmetrical structured messaging, so you never have to wait for a
> specific response. RPC is an attempt to lock each program into a
> linear time-line with a single thread of execution.
Asynchronous programming is the undoing of most programmers.  Folks who think dealing with reentrancy and multithreaded applications is "a new idea" would be baffled by:

   "The service you requested some time ago was unable to complete,
   for the following reason.  I'll leave it to you to figure out WHICH
   request I'm talking about and how to recover from that..."
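The bookkeeping that asynchronous messaging forces on the requester can be sketched in a few lines (all names and the transport are invented; the messaging layer itself is omitted): every request carries a correlation id, and enough context must be retained to recover when a reply arrives "some time" later.

```python
import itertools

# Hypothetical async client: the caller must remember WHICH request
# a late reply refers to and what it was doing at the time.

class AsyncClient:
    _ids = itertools.count(1)

    def __init__(self):
        self.pending = {}             # id -> context needed for recovery

    def send(self, request, context):
        rid = next(self._ids)
        self.pending[rid] = context   # transport omitted in this sketch
        return rid

    def on_reply(self, rid, ok, reason=None):
        context = self.pending.pop(rid)
        if not ok:
            return f"request {rid} ({context}) failed: {reason}"
        return f"request {rid} ({context}) completed"

c = AsyncClient()
rid = c.send("heater ON", context="diagnosing low inlet temp")
print(c.on_reply(rid, ok=False, reason="contactor fault"))
```

With blocking RPC, `pending` and the correlation id disappear: the "context" is simply the caller's stack, which is exactly the simplification being defended here.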
>>>> The biggest problem lies in the use of a central controller in the
>>>> implementation.
>>>
>>> So drop it. Allow devices to export their control interfaces
>>> (in a discoverable way, see below) and allow other devices
>>> (plural) to send commands to those.
>>
>> Doesn't make sense to have the remotes tell the system what they
>> are and how they are used. The *system* has been designed with
>> a specific set of I/O's and requirements in mind. The remotes
>> have been SELECTED to fulfill those needs.
>
> In a modern A/V system, components come and go frequently,
> though some core components are fairly stable.
In a process control application, components come and go *rarely* -- because additions and deletions affect the control ALGORITHMS:

   "Hmmmm... I see I've got an EXTRA temperature sensor!  How should
   I make use of the data that it presents?  Should I assume it
   represents an opportunity for yet another cascaded control loop?
   If so, which?"

Or:

   "Gee, now there are TWO heating units!  I guess there must be a
   reason for that.  But, which one do I pick as the effector in
   this control loop?"
>> If the system could "discover" an extra air handling unit, it
>> wouldn't know how to *use* it!
>
> Each unit should be configured (at install time) to know its
> local role, and the controller should be flexible to control
> it with minimal additional configuration data.
So, the application should migrate into the sensor/actuator?  And, this will somehow make the application easier to design and maintain?

   "Hi, you're a temperature sensor.  The range of temperatures that
   you should expect to encounter is [X,Y].  You should never expect
   to see temperature changes in excess of D/t.  Your current state
   should be reportable to the following clients: ...  Anyone else
   inquiring would be an indication of a flaw in their implementation
   (so you should throw an alarm).  I know this sounds like a lot,
   but, tomorrow, you'll serve a different role in this same physical
   installation.  I'll tell you about it, later..."

Traditionally, *the* CPU fetched a binary value from a "converter" (voltage/current-to-digital, current-to-frequency, etc.) over a local "I/O bus" and massaged that into a value that made sense to its algorithms (and "readouts").  I'm just inserting a VERY LONG RANGE "bus extender" in the implementation.  And, ensuring that *nothing* changes -- no need for developers to understand message passing, network protocols, etc.
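A sketch of the traditional split being defended here, with an invented scale factor and range (12-bit converter, linear mapping): the remote ships raw counts over the "bus extender", and all massaging into engineering units -- plus the sanity checks tied to this sensor's ROLE -- stays in the controller.

```python
# Hypothetical controller-side conversion: the remote node only
# reports raw converter counts; it never knows what they mean.

RAW_SPAN = 4095                      # 12-bit converter, for illustration
TEMP_LO, TEMP_HI = -10.0, 90.0       # designed-in range for THIS role

def to_celsius(raw_counts):
    """Map raw counts linearly onto this sensor's designed range,
    flagging out-of-range counts as a probable converter fault."""
    if not 0 <= raw_counts <= RAW_SPAN:
        raise ValueError("converter fault?")
    return TEMP_LO + (TEMP_HI - TEMP_LO) * raw_counts / RAW_SPAN

temp = to_celsius(2048)              # roughly mid-scale
```

Reassigning the sensor to a different role tomorrow means changing this table in the controller, not re-provisioning the field device.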
>> [You don't just "install" a node; you have to "introduce" it to
>> the CM subsystem so the system as a whole knows how it will be
>> using it]
>
> Right. But if the node has already queried the network to know
> what it is connected to (or the installer has selected connections
> from the other nodes visible on the network), then the central
> controller will often be able to manage it with no further config.
You can't get "meaning" from "connections". Only the application can impart meaning to the individual components in a process/system. E.g., in film coating (i.e., putting the "skin" on a tablet, seed, etc. -- think M&M's), there are temperature and moisture sensors in abundance. Lots of control loops to regulate the characteristics of the input air, process air, pan speed, bed temperature, etc. Yet, what you are REALLY controlling is the moisture content in the *exhaust* from the process (because you are trying to evaporate the liquid from the coating material as it dries onto the tablets, seeds, etc.). Change inlet air moisture content, temperature, or application rate of coating solution, pan speed, etc. and the coating solution evaporates at a different rate (or, doesn't apply uniformly). So, the exhaust characteristics are reflected back to the input control loop THROUGH the process loop. No individual sensor or actuator can clearly state its role in the process; they all exist as parts of a gestalt. Something else (i.e., the controller) knows of their interrelationships. The process *designer* decides when additional sensors or effectors are needed and *how* they will tie into the control system/control algorithms, etc. If I gave you a box of temperature sensors, moisture sensors, dehumidification coils, motors, heaters, etc., how would "connecting them together" magically imbue *them* with knowledge of their particular application? The fact that a temperature sensor can report temperature in degrees K has virtually NO value -- beyond reporting raw binary data from its hardware interface. It doesn't "buy" the application anything. In fact, makes it harder as now you have to export a "calibration interface" so the controller/UI can get/store calibration data for that "device" *in* the device. The big win comes in removing the miles of cable that connect the I/O's to the data acquisition/control hardware. Esp if you can do so inexpensively!
>> A colleague has suggested what might be a clever hack to address
>> the problem. But, I think it will require a change to the hardware.
>> Of course, the photoplots went off a week ago to the board house!
>> It sure would have been nice for the solution to have appeared
>> earlier! My error being the assumption that I'd "fix it in software"
>> without acknowledging that some things *can't* be fixed, there!
>> (e.g., hard to add a "motor" to a device just by tweeking the code!)
>>
>> I've spent the past couple days looking for cheap ways to patch
>> the existing design (so I don't lose the time that was spent
>> getting the boards). But, I suspect I will soon have to switch to
>> figuring out how to *modify* the design and layout some new artwork.
>> Then, update the specs (this past week's effort :< ) to reflect this.
>
> Isn't that always the way? But you probably would have found some
> other reason for a board spin anyhow - it's just the iterative
> nature of development.
No, this is a /pro bono/ job.  I don't want to spend 20 hours if, instead, I can get by spending just 10 -- I've got other things clamoring for my time (big reward to the person who can demonstrate a successful human cloning process!)

I was able to make the hack work with the existing boards.  But, I have had to add constraints to future implementations (which will probably be inconsequential).  OTOH, it will save them a boatload of money (which they will probably piss away on something stupid; but, I've no say in how others run their affairs -- lest I want to give them a say in how I run *mine*!)  AND it gives me a beta site for some of the code I'm using in another project; lets me see how others adapt to working in, and using, that environment!
>> (sigh) "The best laid plans..." Trying to get things done quickly
>> inevitably takes longer.
>
> Hence the saying "more haste, less speed" :).
>
> Clifford Heath.
On 7/20/2016 11:00 AM, Dave Nadler wrote:
> Hi Don - You might want to have a look at TIB/Rendezvous.
> For several decades now this provides a namespace-driven
> many-node publish-subscribe bus used to implement lots
> of distributed applications (quotation and trading systems,
> semiconductor fabs, etc, etc, etc). Facilities included
> address your questions I think...
I'll admit to only giving a cursory examination of a "TIBCO Rendezvous Concepts" document, so it's possible I'm missing A LOT!  :-/

But, it seems like all it really does is virtualize connections.  I.e., it frees "clients" from having to know where "services" are located.  That's not an issue in this application -- things are pretty static.

It also seems to operate whiteboard-style -- everything (can) see everything (else).  In particular, I don't see how I can hide "Master.Power.Switch" so that only certain "tasks" can *see* its state -- and even fewer can *control* its current state.  I.e., if I don't want a particular task to be able to turn power on/off, I (presently) simply don't put that "name" in the namespace that I give to that task.  (The namespace acts as a filter -- only names that exist in a task's namespace can be referenced IN ANY WAY.  So, a task can't "synthesize" names to see if they *might* be legitimate.)

Beyond that, I don't see how I can bind "Power.Switch" for task A to one particular piece of "hardware" while binding it to a completely different piece of "hardware" for task B (and to NOTHING for task C).  This system seems to expect everything to "cooperate" in a single space.  (?)  It doesn't help me partition/isolate that space into smaller units appropriate for the individual "tasks" in it.
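The filtering and per-task binding being described can be sketched in a few lines (dicts stand in for real per-task namespaces; all names invented): task A's "Power.Switch" resolves to one piece of hardware, task B's to another, and task C has no such name to reference at all.

```python
# Hypothetical per-task namespaces acting as capability filters.

switch_1, switch_2 = object(), object()   # stand-ins for real hardware

ns_task_a = {"Power.Switch": switch_1}    # A controls switch #1
ns_task_b = {"Power.Switch": switch_2}    # SAME name, DIFFERENT hardware
ns_task_c = {}                            # C can't even NAME a switch

# Same name, different bindings:
assert ns_task_a["Power.Switch"] is not ns_task_b["Power.Switch"]

# Task C can't "synthesize" the name to probe for it:
try:
    ns_task_c["Power.Switch"]
    denied = False
except KeyError:
    denied = True
assert denied
```

Contrast this with a single shared subject space, where hiding a name from one participant requires bolting on a separate access-control mechanism.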