Trying to align something you can't see with something else that you can't see is going to be very difficult at best...
...
I know there are many here who use the motion sensors reliably, but you'll probably find that their requirements tend to be a little more vague than the exact critical sensing conditions that you're trying to achieve.
My own experience confirms that this scenario is far from ideal... unfortunately, it is what I have, and my only real options are to either give up entirely, or try to wring whatever functionality I can out of what I have.
Just to clarify the picture a bit, I'm not using ****Eye motion sensors, but rather the Floodlight motion sensors (model PR511, IIRC). I don't know how the types compare, nor whether these sensors have the same characteristics as the ****Eye types, but I do know they differ operationally in a couple of ways.
By the way, I have also used three different EagleEye sensors indoors (although they are supposedly made for outdoor use). They generally work fairly well for controlling area lights (at least until their batteries start getting low, then all bets are off), but they are still noticeably less "positive" in their responses when compared to the non-X10 motion-sensing bulb sockets that I have.
Anyway, back to the issue of triggering surveillance cameras with the floodlight sensors that I have: the "tool" that I'm guessing would help in determining the areas "covered" by the motion sensors would be something that could positively trigger a sensor from a given spot, regardless of any performance variations in the sensor. (In other words, something that would overcome any borderline sensitivity issues due to environment or whatever, and no fooling ensure a yes/no detection from a given spot in the area of interest.)
What I'm hoping to accomplish is to positively map the edges of the sensor's horizontal (side-to-side) field of view, which I am assuming to be more of a "mechanical" limit and thus unaffected by performance variations due to current environmental factors. (If this assumption is invalid, then the whole exercise is moot.)
The end result that I seek is to be able to tell which camera to turn on when any given sensor triggers - as opposed to the ambiguity that occurs when a sensor can detect areas covered by more than one camera, making it impossible to determine which camera to turn on. Again, my assumption is that it should be viable to mask or orient the sensors so that they can "see" only an approximation of the 60-degree field of view that the cameras see, and will ignore motion that occurs outside of such regions.
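In case it helps picture where I'm headed: once each sensor only "sees" what one camera sees, picking the camera collapses to a trivial one-to-one lookup. A minimal sketch (the sensor and camera names here are purely made up for illustration; the real mapping would come from wherever the masked sensors end up pointing):

```python
# Hypothetical sketch: choose which camera to switch on based on which
# motion sensor fired.  Names are invented for illustration only.
SENSOR_TO_CAMERA = {
    "floodlight_front": "camera_driveway",
    "floodlight_side":  "camera_walkway",
    "floodlight_back":  "camera_yard",
}

def camera_for(sensor_id):
    """Return the single camera to turn on for a given sensor,
    or None if the sensor isn't mapped (i.e. coverage is ambiguous)."""
    return SENSOR_TO_CAMERA.get(sensor_id)

if __name__ == "__main__":
    print(camera_for("floodlight_front"))  # -> camera_driveway
    print(camera_for("unknown_sensor"))    # -> None
```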
To me, this is a different problem than what has been discussed here - that is, my interest is in limiting detections to only certain areas, rather than addressing the concern about whether or not detections will actually occur within such areas, depending on environmental factors, phase of the moon, or whatever... In other words, I'm not trying to ensure that "real" detections occur within the view of the sensors, but rather to limit the area that they can react to.

Unfortunately, that still leaves the need to ensure detections in order to map the boundaries... hence my original question of whether there are any good tools or techniques that would facilitate this boundary mapping, given the variability of the sensors' performance. My thought was that some readily-detectable IR source could be moved from the sides of the detection area towards the middle until the sensor triggered, thus determining the edge of view. Unfortunately, I have not come up with any IR source that produces consistent results... so either my trigger source is not adequate, or else my procedure is bogus somehow... so I'm hoping someone may have some better ideas.
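For what it's worth, if I ever do manage to find two spots that just barely trigger a sensor (one at each edge), turning those into an actual field-of-view angle is simple trigonometry. A rough sketch in Python, just to show the arithmetic - the coordinates, units, and function name are all just for illustration:

```python
import math

def fov_degrees(left_edge, right_edge, sensor=(0.0, 0.0)):
    """Estimate a sensor's horizontal field of view from two measured
    (x, y) points that just barely trigger it, one at each edge, plus
    the sensor's own position.  Units don't matter as long as they're
    consistent."""
    a = math.atan2(left_edge[1] - sensor[1],  left_edge[0] - sensor[0])
    b = math.atan2(right_edge[1] - sensor[1], right_edge[0] - sensor[0])
    diff = abs(math.degrees(a - b))
    return min(diff, 360.0 - diff)  # smaller angle between the two bearings

if __name__ == "__main__":
    # Example: edge points measured 10 ft out, roughly 30 degrees either
    # side of straight ahead -> should report about 60 degrees.
    print(round(fov_degrees((10.0, 5.77), (10.0, -5.77)), 1))
```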