Event Order ...
OK, assuming the worker threads do it and the main thread does nothing but dump stuff to the main queue, just to think through the permutations...

1. A new event of a type that is currently running can only be seen (in the main queue) by a thread that is not currently running any event (and therefore not running an event of that type), i.e. it has to be a free thread in order to start pulling stuff out of the main queue.
2. I.e. if there's a thread running an event of type X, and he goes to a 'serialized wait queue' to see if there's another of that type there, the only way one could have gotten there is that other threads have emptied the main queue down to that point and are now processing any that were before that second X, which basically ensures correct order.
3. So, if that thread running an event of type X completes it, he can be sure that any X event in the serialized queue is older than the one at the top of the main queue, and hence would even under normal circumstances be processed before it.
4. That doesn't guarantee that it's older than any others in the serialized queue, but that's just the way it is for serialized ones. They are only guaranteed to run in the order that events of that type are received.
5. #1 also more or less guarantees that we can't see the pathological scenario where non-serialized ones aren't being processed, because nothing else can get into the serialized queue until threads are freed up to start pulling stuff out of the main queue again. Therefore, even if they were all processing serialized event types at some point, they would run out of serialized fodder and go back to the main queue again.

P.S. If a thread is running an event of type X and completes it, he doesn't have to start another of type X immediately, even if one is available. All that is important is that HE NOT RUN two at the same time. However, given #1, this doesn't seem to be a useful factoid. I.e. we could do the 'assign a running counter to every event put into the main queue' thing. Then, if a thread finished an event of type X, and saw one of type X in the serialized queue, he could check whether that one or the one at the top of the main queue should be run next.

But, since the worker threads themselves are pushing things into the serialized queue, #1 comes into play and the one at the top of the main queue has to be behind the next X in the serialized queue. If the main thread were doing the distribution into the two different queues, then this scheme could be useful. But it's not required if the workers are doing it.

I think that those points define the issues involved, and your scenario should work correctly.
Dean Roddey
Explorans limites defectum
On the 'serialize all of them' thing, you are sort of assuming that events of a given type all come from one place. In a multi-user system, we could get events of a given type coming from two or more users in different rooms, and the action is going to react to info in the event itself and do room-specific stuff. We have no way to know that, and so wouldn't want to serialize them, since they are really unrelated. User A may end up waiting 5 seconds for user B's action to complete, when in fact there was no reason to do that.

I think that only the designer of the system can really know for sure if all events passing a given filter are really related.

It could be the default for newly created events of course.
Dean Roddey
Explorans limites defectum
Yeah, I concur. It seems fair that the person who presses the button first gets executed first, and that will be guaranteed by serialization. However, if the action was written to support parallelism, there is no reason not to run both events simultaneously, thus improving overall system response time.

However, if you take the counter example I provided earlier (ReadField; Add 1; WriteField), you'll get inaccuracies, since the action is not written to support parallel execution. Say the counter field name is generated from part of the event, e.g. "Room". Then there is no need to serialize as long as events come from different rooms, but if two events come from the same room, you'll get inaccuracies again. You can solve it by serializing a single trigger (and suffer possible response time problems), *or* by creating a separate trigger for each room (filtered by room) and serializing those triggers. Then two people in different rooms run quickly in parallel, and two events from the same room serialize as needed for accurate results.
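A minimal sketch of that lost-update problem (in Python purely as an illustration; CQC actions obviously aren't Python, and the `handle_unsafe`/`handle_serialized` names are invented for the example). A barrier forces both "events" to do their ReadField before either does its WriteField, which is exactly the interleaving that breaks the counter:

```python
import threading

counter = 0  # stands in for the counter field

def handle_unsafe(barrier):
    """ReadField; Add 1; WriteField with no synchronization."""
    global counter
    value = counter      # ReadField
    barrier.wait()       # force both events to read before either writes
    counter = value + 1  # Add 1; WriteField

barrier = threading.Barrier(2)
threads = [threading.Thread(target=handle_unsafe, args=(barrier,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
unsafe_result = counter      # 1: one increment was lost

lock = threading.Lock()
counter = 0

def handle_serialized():
    """The same read-modify-write, but serialized."""
    global counter
    with lock:
        counter += 1

threads = [threading.Thread(target=handle_serialized) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
serialized_result = counter  # 2: both increments survive

print(unsafe_result, serialized_result)  # 1 2
```

The unsafe version loses one of the two increments; serializing the read-modify-write (here with a lock) gives the accurate count.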

Providing a flag gives the ability to have accuracy and performance...  Very cool -- Bob
You can also do some basic synchronization. Check the TestAndSet command of the variables target.
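For illustration, here's roughly what a test-and-set operation buys you, sketched in Python. The `Variables` class and its method names are invented for the sketch and do not reproduce the real TestAndSet command's exact signature or semantics; the point is just that an action can atomically claim a flag variable, and a second action sees that the claim failed:

```python
import threading

class Variables:
    """Toy model of atomic test-and-set over named global variables.
    Invented for the sketch; not the real CQC variables target."""
    def __init__(self):
        self._lock = threading.Lock()
        self._vars = {}

    def set(self, name, value):
        with self._lock:
            self._vars[name] = value

    def test_and_set(self, name, expected, new):
        # Atomically: only if the variable holds `expected`, store `new`.
        with self._lock:
            if self._vars.get(name) == expected:
                self._vars[name] = new
                return True
            return False

v = Variables()
v.set("busy", False)
first = v.test_and_set("busy", False, True)   # True: this action claimed the flag
second = v.test_and_set("busy", False, True)  # False: flag already claimed
print(first, second)  # True False
```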
Dean Roddey
Explorans limites defectum
So there's still a major race condition issue. Looking back on it, I think this was why I was pushing before for the main queuing thread to handle this, and just live with the 'queue jumping' issue.

The worker threads block on the main queue. That cannot be a normal locking type of call; they all just get blocked on a wait list inside the queue, and get woken up when objects show up in the queue. The 'get' itself is atomic, but once a thread has gotten its event, nothing after that is. That means a thread has no way to return from the get, take a lock, check the event, see if it is a serialized triggered event, and add it to the active triggered events list, before another thread might get another of the same type and beat it to the lock.

Of course one of them will get the lock first. But the problem is that neither of them knows which one got the earlier event, so this could still allow out-of-order invocation of the events of that type. We aren't all that worried about slightly out-of-order processing normally, but the serialized ones are exactly the ones we really need to keep in order.

I could (as previously mentioned) put a running sequence number into each of them, but these two competing threads would have no way of comparing notes to see who has the lower-numbered one (and hence who wins and who loses). That would be super messy to deal with.

Thinking about it more, even having the main queuing thread separate them into regular and serialized queues wouldn't do any good, either; it would still have the same issue as above. It's amazing how stupidly complex this seemingly workaday task actually is.

It may be that there has to be a 'dispatching' thread sitting between the ones that are adding to the main queue, and the workers. The worker threads wouldn't block on the queue, the dispatching thread would. If we did that, then each thread could have his own internal queue and the dispatching thread could distribute events to them. Each thread would have an event, a queue, and a member that indicates the path of the event type he is working on.

The dispatching thread can easily find those that are not active (no event type set) and just hand out events to them, which would work pretty much like it does now. If he pulls out a serialized one, he can just ask each thread whether it is working on that event type. If one is, it returns true to indicate it took the event and added it to its queue. If no one currently is, or if it completed while this check was happening, then just give it to any available thread. We know no thread can grab a new one while this is going on, because the worker threads don't grab anymore, they are given to. In that sense it sort of reduces synchronization requirements, since the worker threads never compete with each other.

OTOH, for efficiency (and to keep the complexity from ever declining) there has to be an event that each thread posts when it has completed its work and is about to go back to blocking on its wait event. That deals with the issue where all threads are busy, so the dispatching thread has to block and wait for available space. If all threads are busy, it blocks on the wait-for-space event. When a thread completes, it posts that event, which wakes up the dispatching thread. So when the dispatching thread sees all threads busy, it resets the event and blocks on it.

Management of the space-available event has to be synchronized as well, along with the cleanup of the information that indicates a thread is done processing. Each worker thread wouldn't have a separate mutex or thread-safe queue; instead they would all share one mutex with the dispatching thread. That would allow for sync of the space-available event and for checking whether a thread is currently processing a particular event type.

When a thread is done, it locks that shared mutex, clears its current event path, posts the space-available event, unlocks, and exits. The dispatching thread, if it sees all threads busy, locks the mutex, does another check to make sure they are all still busy, and if so resets the space-available event, unlocks, and blocks on the event. So not too bad. When checking to see if any thread is processing a serialized event of type X, it locks the mutex and just calls each active one. They cannot clear their current event type during this, even if they actually complete during the check, because the mutex is locked and they get stuck on the way out.

The threads will have to do that final lock and cleanup within the context of their thread loop, because they could always find out they have been given another and need to go back around again.

If the dispatching thread sees that a worker thread is done, then it knows that thread is blocked on its wait event. So it can just load a new event into the thread's queue (which will set the thread's current event path member, since it's the first one loaded), then pulse the thread's wait event to wake it up. So that is all easily atomic.
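A rough Python sketch of that dispatching scheme, just to check the logic (all names are invented; the real implementation would be in CQC's own code): each worker has its own queue and a current-event-type member, final cleanup happens under the one shared mutex, and the dispatcher routes a serialized event to whichever worker already owns its type, or else to an idle one. A simple spin loop stands in for the wait-for-space handling:

```python
import queue
import threading
import time

class Worker:
    """One worker: its own queue, plus the event type (path) it is
    currently processing. Names invented for the sketch."""
    def __init__(self, state_lock, results):
        self.q = queue.Queue()
        self.event_type = None      # None means idle
        self.state_lock = state_lock
        self.results = results
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            ev_type, seq = self.q.get()
            time.sleep(0.01)                    # simulate doing the work
            self.results.append((ev_type, seq))
            with self.state_lock:               # final cleanup under the shared mutex
                if self.q.empty():
                    self.event_type = None      # nothing more queued for us

def dispatch_one(workers, state_lock, ev_type, serialized, seq):
    """Route one event; spin briefly (simple back-off) if all workers are busy."""
    while True:
        with state_lock:
            target = None
            if serialized:
                for w in workers:               # someone already on this type?
                    if w.event_type == ev_type:
                        target = w
                        break
            if target is None:
                for w in workers:               # otherwise any idle worker
                    if w.event_type is None:
                        target = w
                        break
            if target is not None:
                target.event_type = ev_type
                target.q.put((ev_type, seq))
                return
        time.sleep(0.001)

state_lock = threading.Lock()
results = []
workers = [Worker(state_lock, results) for _ in range(2)]
for seq in range(4):
    dispatch_one(workers, state_lock, "Type.X", True, seq)
time.sleep(0.5)
x_order = [s for (t, s) in results if t == "Type.X"]
print(x_order)  # [0, 1, 2, 3]: one type's serialized events stay in order
```

Because the dispatcher checks ownership and appends to the owner's queue under the same mutex the workers use to clear their type, events of one serialized type can never leapfrog each other.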

There may be some much more magical solution that makes all this just happen naturally, but I can't see what that would be yet.
Dean Roddey
Explorans limites defectum
I can't imagine that there is a CQC system with 1000's of triggers, so why not make it dirt simple: have a worker thread per event type, push events into the respective queue, and then wake the thread?
Mark Stega
I could (as previously mentioned) put a running sequence number into each of them, but these two competing threads would have no way of comparing notes to see who has the lower numbered one (and hence who wins and who loses.) That would be super-messy to deal with.

Actually, this is quite simple.  Just have two global variables per trigger.  One is "next event number" and the other is "current event number".  The main thread increments next event number and puts it in the main queue with the event and the trigger type.  When a worker grabs an event/trigger/number from the queue it doesn't have to talk to the other threads, it just looks at the global "current event number" for this trigger.  If the number matches the one from the queue, execute and then increment the global.  If it doesn't match, just wait until it does.
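That two-counter scheme is essentially a ticket lock. A Python sketch of it (names invented for the example), with a condition variable standing in for the "just wait until it does" part:

```python
import threading

class TicketSerializer:
    """The two-counter scheme for one trigger: 'next event number' is
    handed out at queuing time; a worker runs its event only when
    'current event number' reaches its ticket, then bumps it."""
    def __init__(self):
        self.next_num = 0
        self.current_num = 0
        self.cond = threading.Condition()

    def take_ticket(self):
        # done by whoever puts the event into the main queue
        with self.cond:
            ticket = self.next_num
            self.next_num += 1
            return ticket

    def run_in_order(self, ticket, fn):
        # done by whichever worker grabbed the event from the queue
        with self.cond:
            while self.current_num != ticket:   # "just wait until it does"
                self.cond.wait()
            fn()                                # runs under the lock, for simplicity
            self.current_num += 1
            self.cond.notify_all()

ser = TicketSerializer()
order = []
tickets = [ser.take_ticket() for _ in range(4)]
# start the workers in reverse, so without the ticket check they
# would tend to run backwards
threads = [threading.Thread(target=ser.run_in_order,
                            args=(t, lambda t=t: order.append(t)))
           for t in reversed(tickets)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(order)  # [0, 1, 2, 3]
```

The workers never need to compare notes with each other; each only compares its own ticket against the shared "current event number".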

Since you have good support for writing thread-aware macros already (very cool BTW), it seems that serialized macros could be very rare.  Maybe a thread for each "serialized" trigger is not unreasonable.

I'm already well into the above discussed changes. I'm going to move forward with that. It's not too bad. And I'll make some useful improvements to the code in the process. It's been a while since this stuff has been visited.
Dean Roddey
Explorans limites defectum
One simplification: I don't really need to do the 'wait for space' event thing for the dispatch thread. I'll just do a simple 'back off' loop to wait for a thread to become available. It would be a wee bit slower and less efficient, but a lot simpler.

Or actually, I can use a semaphore. That's something I seldom think about, but it fits perfectly here. Each worker thread enters it when it starts processing an event, and exits it when it has no more to process. Set the max semaphore count to the number of worker threads, so the dispatch thread can just block on entering it. As soon as a worker becomes inactive and exits the semaphore, the dispatch thread wakes up. He exits it to clear up that entry, and now he can check for a new target thread.
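A Python sketch of the semaphore idea (names invented; here the dispatcher acquires the slot on the worker's behalf before handing out the event, a slight variation on the entry/exit described above, but it gives the same gating): with the count set to the number of workers, the dispatcher blocks automatically whenever all workers are busy.

```python
import threading
import time

NUM_WORKERS = 3
slots = threading.Semaphore(NUM_WORKERS)  # max count = number of worker threads
stats_lock = threading.Lock()
active = 0
peak = 0

def worker_body():
    """Process one event, then exit the semaphore to free the slot."""
    global active, peak
    with stats_lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.02)                      # simulate running the event's action
    with stats_lock:
        active -= 1
    slots.release()

def dispatch(n_events):
    for _ in range(n_events):
        slots.acquire()                   # blocks while all workers are busy
        threading.Thread(target=worker_body).start()

dispatch(10)
for _ in range(NUM_WORKERS):              # drain: reclaiming every slot means all done
    slots.acquire()
print(peak)  # never exceeds NUM_WORKERS
```

No explicit wait-for-space event or back-off loop is needed; the semaphore's count does that bookkeeping.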
Dean Roddey
Explorans limites defectum
I have created a reproducible test case for this problem.

My water meter sends me a pulse count every 500ms which is always increasing.  I created a trigger which updates a global variable with the current pulse count.  If the pulse count ever goes backwards I log a msg that I have been called out of sequence.

This did not result in any out-of-sequence calls, no matter how much water I wasted trying to generate the error. I then modified my trigger to randomly insert a 1000ms Pause before testing the pulse count, and I was able to get it to fail. Larger pause times cause more frequent failures. It should fail even with a delay of 500ms, but I have not seen that yet.

My CPU has 4cores/8threads and is very lightly loaded, so probably less likely to exhibit the problem as well.

If Not
   P3=Edges out of Sequence: %(TEvRTV:NewFldVal) processed after %(GVar:WaterMeterEdges)