Forum

Task priorities in non-strictly real-time systems

Started by pozz, 3 January 2020
pozz wrote:
> On 03/01/2020 15:19, David Brown wrote:
<snop>
>
> You're right, cooperative scheduling is better if I want to reuse the
> functions used in superloop architecture (that is a cooperative scheduler).
>
Preemptive scheduling probably causes more problems than it solves, over some problem domains. SFAIK, cooperative multitasking can be very close to fully deterministic, with interrupts being the part that's not quite deterministic.

-- Les Cargill
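A minimal sketch of the superloop style being referred to here, with invented task names: each "task" is a function that does a bounded slice of work and returns, so apart from interrupts nothing ever takes the CPU away, and the worst-case pass time is easy to reason about.

    #include <stdint.h>

    volatile uint32_t g_ticks;               /* would be incremented by a timer ISR */

    /* Hypothetical non-blocking "tasks": each does a slice of work and returns. */
    static void poll_uart(void)        { /* check RX flag, push byte to a buffer */ }
    static void update_display(void)   { /* refresh one row, then return */ }
    static void run_control_loop(void) { /* read sensors, compute new output */ }

    int main(void)
    {
        uint32_t last_control = 0;

        for (;;) {                            /* the superloop */
            poll_uart();                      /* serviced on every pass */
            update_display();

            if ((uint32_t)(g_ticks - last_control) >= 10u) {
                last_control = g_ticks;       /* pace this "task" off the tick */
                run_control_loop();
            }
        }
    }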
Clifford Heath wrote:
> On 5/1/20 11:58 am, pozz wrote:
<snip>
>
> I wish I could, but it is actually a frightfully difficult subject.
> Basically it's the same as thread-safe programming.
> Only about 1% of programmers think they can do it.
> Of those, only about 1% actually can.
>
I could show anyone how in an afternoon. So I take your statement as being "only 1% of 1% have been forced to take that afternoon to learn it."

I should qualify that - I could show anyone working on a classic architecture how. With multilevel caches and certain sorts of MMUs, there may be more to it.

I'm thinking the WindRiver drivers course was about one week, which should about cover everything conceptually.
> It's the 0.99% that you have to worry about. At least some of them for
> Toyota. Don't be one of them!
>
> However, this difficulty is precisely why Rust was created. Although I
> haven't yet done a project in Rust, I've done enough multi-threaded work
> in C++ to know that the ideas in Rust are a massive leap forwards, and
> anyone doing this kind of work (especially professionally) owes it to
> their users to learn it.
>
I seriously doubt Rust represents some quantum leap here.

<snip>
>
> You need to understand about basic mutex operations, preferably also
> semaphores, and beyond that to read and write barriers (if you want to
> write lock-free code). It's a big subject.
>
And that's about an afternoon, really. Not so much the barriers and bothering with lock-free. That may take a little more.
> Clifford Heath.
-- Les Cargill
Paul Rubin wrote:
> pozz <pozzugno@gmail.com> writes:
>> As I already wrote many times, I don't have experience with RTOS and
>> task sync mechanism such as semaphores, locks, mutexes, message queues
>> and so on.
>> So I'm not able to understand when a sync is really needed.
>>
>> Could you point on a good simple material to study (online or book)?
>
> I have found it simplest to have tasks communicate by message passing,
> the so-called "CSP model" (communicating sequential processes), rather
> than fooling around with explicit locks. With locks you have to worry
> about lock inversion and all kinds of other madness, and your main hope
> of getting it right is formal methods, like Lamport used for the Paxos
> algorithm. Message passing incurs some cpu overhead because of the
> interprocess communication and context switches, but it gets rid of a
> lot of ways things go wrong.
>
> If your RTOS supports message passing (look for "mailboxes" in the RTOS
> docs) then I'd say use them.
>
Mailboxes are just semaphores with extra steps :) When I've written mailboxes for use in user space, they usually use a semaphore. In kernel space, you're already under a "semaphore" ( but still subject to asynchronous interrupts ). This of course varies by O/S....
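A rough sketch of that "semaphore with extra steps" view, using POSIX primitives purely for illustration (the sizes and names are invented): the mailbox is a ring of slots guarded by a mutex, with two counting semaphores so that senders block while the box is full and receivers block until mail arrives.

    #include <pthread.h>
    #include <semaphore.h>
    #include <string.h>

    #define MBOX_SLOTS 8
    #define MSG_SIZE   32

    typedef struct {
        char            slot[MBOX_SLOTS][MSG_SIZE];
        unsigned        head, tail;
        pthread_mutex_t lock;      /* protects head/tail/slot */
        sem_t           used;      /* counts filled slots; receivers wait on it */
        sem_t           free;      /* counts empty slots; senders wait on it */
    } mbox_t;

    void mbox_init(mbox_t *m)
    {
        m->head = m->tail = 0;
        pthread_mutex_init(&m->lock, NULL);
        sem_init(&m->used, 0, 0);
        sem_init(&m->free, 0, MBOX_SLOTS);
    }

    void mbox_send(mbox_t *m, const char *msg)
    {
        sem_wait(&m->free);                        /* block while the box is full */
        pthread_mutex_lock(&m->lock);
        strncpy(m->slot[m->head], msg, MSG_SIZE - 1);
        m->slot[m->head][MSG_SIZE - 1] = '\0';
        m->head = (m->head + 1) % MBOX_SLOTS;
        pthread_mutex_unlock(&m->lock);
        sem_post(&m->used);                        /* wake a waiting receiver */
    }

    void mbox_recv(mbox_t *m, char *out)
    {
        sem_wait(&m->used);                        /* block until mail arrives */
        pthread_mutex_lock(&m->lock);
        memcpy(out, m->slot[m->tail], MSG_SIZE);
        m->tail = (m->tail + 1) % MBOX_SLOTS;
        pthread_mutex_unlock(&m->lock);
        sem_post(&m->free);
    }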
> The language most associated with CSP style is Erlang, which doesn't
> really fit on small embedded devices, but Erlang materials might still
> be a good place to learn about the style. Erlang inventor Joe
> Armstrong's book might be a good place to start:
>
Erlang is a fine system. The (arguably) best thing it provides is the "actor pattern", which applies independent of language choice.

<snip>

-- Les Cargill
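To make the message-passing (actor-ish) style concrete, here is one way it commonly looks on a small RTOS; the FreeRTOS queue calls stand in for whatever "mailbox" your RTOS provides, and read_adc()/handle_sample() are invented stand-ins. The two tasks never share a variable; they only exchange copies through the queue, so no explicit locking appears in either one.

    #include <stdint.h>
    #include "FreeRTOS.h"
    #include "queue.h"
    #include "task.h"

    typedef struct { uint32_t channel; int32_t value; } sample_t;

    static QueueHandle_t sample_q;

    static int32_t read_adc(void)                   { return 0; }   /* stand-in for a real driver */
    static void    handle_sample(const sample_t *s) { (void)s; }    /* stand-in for real processing */

    /* Producer owns the ADC; nobody else touches it. */
    static void producer_task(void *arg)
    {
        (void)arg;
        for (;;) {
            sample_t s = { .channel = 0, .value = read_adc() };
            xQueueSend(sample_q, &s, portMAX_DELAY);    /* blocks if the mailbox is full */
        }
    }

    /* Consumer only ever sees its own copy of the data. */
    static void consumer_task(void *arg)
    {
        (void)arg;
        sample_t s;
        for (;;) {
            if (xQueueReceive(sample_q, &s, portMAX_DELAY) == pdTRUE) {
                handle_sample(&s);
            }
        }
    }

    void app_start(void)
    {
        sample_q = xQueueCreate(8, sizeof(sample_t));
        xTaskCreate(producer_task, "prod", 256, NULL, 2, NULL);
        xTaskCreate(consumer_task, "cons", 256, NULL, 1, NULL);
        vTaskStartScheduler();                          /* never returns */
    }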
pozz wrote:
> On 03/01/2020 15:28, Niklas Holsti wrote:
<snip>
>
> Yes, converting a blocking task into a non-blocking state-machine task can
> be hard, but it's complex to write tasks in a preemptive scheduler (you
> need to know when to use locks, semaphores, mutexes and so on).
>
The question becomes - how important is determinism in your system? IMO, this can be expressed in economic terms - more determinism means fewer times the phone rings.

<snip>

-- Les Cargill
On 2020-01-05 21:46, Les Cargill wrote:
> pozz wrote:
>>
>> Yes, converting a blocking task into a non-blocking state-machine task
>> can be hard, but it's complex to write tasks in a preemptive scheduler
>> (you need to know when to use locks, semaphores, mutexes and so on).
I think that makes it sound harder than it is. Avoid directly shared variables when you can, using whatever message-passing tools the language or RTOS provides, or wrap all operations that access shared data in mutexes, and that will solve most problems.

What remains is mainly the skill and experience to design the shared data structures so that the mutex-protected operations are short and snappy without introducing polling, race conditions, deadlocks or starvation. But that polish is needed only for systems where processing resources are tight, which is not often the case today.
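A hedged sketch of that advice, with POSIX mutex names used only for concreteness and invented fields: every access to the shared structure goes through one of two short, locked operations, and the slow work is done on a private copy with no lock held.

    #include <stdint.h>
    #include <pthread.h>

    typedef struct { int32_t temperature; int32_t pressure; uint32_t seq; } readings_t;

    /* Hypothetical consumer of a snapshot; runs with no lock held. */
    static void format_and_send(const readings_t *r) { (void)r; }

    static readings_t      g_readings;
    static pthread_mutex_t g_readings_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Writer: update all related fields inside one short critical section. */
    void readings_update(int32_t temp, int32_t pres)
    {
        pthread_mutex_lock(&g_readings_lock);
        g_readings.temperature = temp;
        g_readings.pressure    = pres;
        g_readings.seq++;
        pthread_mutex_unlock(&g_readings_lock);
    }

    /* Reader: copy a consistent snapshot under the lock, then work outside it. */
    void readings_report(void)
    {
        readings_t snap;

        pthread_mutex_lock(&g_readings_lock);
        snap = g_readings;
        pthread_mutex_unlock(&g_readings_lock);

        format_and_send(&snap);
    }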
> The question becomes - how important is determinism in your system? IMO,
> this can be expressed in economic terms - more determinism means fewer
> times the phone rings.
I've been implementing pre-emptive, priority-driven real-time systems, off and on, since the mid-80's and never has the phone rung because of the non-determinism. Yes, many customers write requirements saying that they want "simple, deterministic scheduling", and then write other requirements for which the only clean solution is a pre-emptive system.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
      .      @       .
On 6/1/20 6:39 am, Les Cargill wrote:
> Clifford Heath wrote:
>> On 5/1/20 11:58 am, pozz wrote:
> <snip>
>>
>> I wish I could, but it is actually a frightfully difficult subject.
>> Basically it's the same as thread-safe programming.
>> Only about 1% of programmers think they can do it.
>> Of those, only about 1% actually can.
>>
>
> I could show anyone how in an afternoon. So I take your statement as
> being "only 1% of 1% have been forced to take that afternoon to learn it."
>
> I should qualify that - I could show anyone working on a classic
> architecture how. With multilevel caches and certain sorts of MMUs,
> there may be more to it.
>
> I'm thinking the WindRiver drivers course was about one week, which
> should about cover everything conceptually.
>
>> It's the 0.99% that you have to worry about. At least some of them for
>> Toyota. Don't be one of them!
>>
>> However, this difficulty is precisely why Rust was created. Although I
>> haven't yet done a project in Rust, I've done enough multi-threaded
>> work in C++ to know that the ideas in Rust are a massive leap
>> forwards, and anyone doing this kind of work (especially
>> professionally) owes it to their users to learn it.
>>
>
> I seriously doubt Rust represents some quantum leap here.
>
> <snip>
>>
>> You need to understand about basic mutex operations, preferably also
>> semaphores, and beyond that to read and write barriers (if you want to
>> write lock-free code). It's a big subject.
>>
>
> And that's about an afternoon, really. Not so much the barriers and
> bothering with lock-free. That may take a little more.
Behold a 0.99%-er! :)
On 1/5/20 2:32 PM, Les Cargill wrote:
> pozz wrote:
>> On 03/01/2020 15:19, David Brown wrote:
> <snop>
>>
>> You're right, cooperative scheduling is better if I want to reuse the
>> functions used in superloop architecture (that is a cooperative
>> scheduler).
>>
>
> Preemptive scheduling probably causes more problems than it solves, over
> some problem domains. SFAIK, cooperative multitasking can be very close
> to fully deterministic, with interrupts being the part that's not quite
> deterministic.
>
Preemptive scheduling solves a lot of serious issues when there are significant Real-Time requirements, as without it, every task needs to at least check for a possible task switch often enough to allow the tight real-time operations to complete on time.

Yes, if that operation can be done COMPLETELY in the hardware ISR, then other operations don't need to worry about them as they are just interrupts. It isn't that uncommon for these sorts of operations to need resources that mean they can't just complete in an ISR.

Which is better for a given set of tasks is very dependent on those tasks, and the skill set of the programmer(s). I tend to find for the problems I personally run into, preemption works well.
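One common shape of that case, sketched with FreeRTOS names purely for concreteness (the UART is an invented example): the ISR does only what must happen immediately and signals a high-priority task, which preemption then lets run at once to do the part that needs RTOS resources.

    #include "FreeRTOS.h"
    #include "semphr.h"
    #include "task.h"

    static SemaphoreHandle_t rx_sem;    /* signalled by the ISR, waited on by the task */

    /* ISR: only the time-critical part, then hand the rest off. */
    void uart_rx_isr(void)
    {
        BaseType_t woken = pdFALSE;
        /* ...read the byte out of the hardware register here... */
        xSemaphoreGiveFromISR(rx_sem, &woken);
        portYIELD_FROM_ISR(woken);      /* switch now if the handler task is higher priority */
    }

    /* High-priority task: the part that needs RTOS resources (queues, allocators, ...). */
    static void uart_handler_task(void *arg)
    {
        (void)arg;
        for (;;) {
            xSemaphoreTake(rx_sem, portMAX_DELAY);
            /* parse the frame, post results to other tasks -- too heavy for the ISR */
        }
    }

    void uart_handler_start(void)
    {
        rx_sem = xSemaphoreCreateBinary();
        xTaskCreate(uart_handler_task, "uart", 256, NULL, configMAX_PRIORITIES - 1, NULL);
    }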
On 1/5/2020 12:32 PM, Les Cargill wrote:
> pozz wrote:
>> On 03/01/2020 15:19, David Brown wrote:
> <snop>
>>
>> You're right, cooperative scheduling is better if I want to reuse the
>> functions used in superloop architecture (that is a cooperative scheduler).
>
> Preemptive scheduling probably causes more problems than it solves, over some
> problem domains. SFAIK, cooperative multitasking can be very close to fully
> deterministic, with interrupts being the part that's not quite deterministic.
Preemptive frameworks can be implemented in a variety of ways. It need NOT mean that the processor can be pulled out from under your feet at any "random" time. Preemption happens whenever the scheduler is invoked.

In a system with a time-driven scheduler, the possibility of the processor being rescheduled at any time exists -- whenever the jiffy dictates. However, you can also design preemptive frameworks where the scheduler is NOT tied to the jiffy. In those cases, preemption can only occur when "something" that changes the state of the run queue transpires.

So, barring "events" signalled by an ISR, you can conceivably execute code inside a single task for DAYS and never lose control of the processor. OTOH, you could end up losing control of the processor some epsilon after acquiring it -- if you happen to do something that causes the scheduler to run. E.g., raising an event, sending a message, changing the priority of some task, etc. In each of these instances, a preemptive framework will reexamine the candidates in the run queue and possibly transfer control to some OTHER "task" that it deems more deserving of the processor than yourself.

    process();                      // something that takes a REALLY LONG time
    raise_event(PROCESSING_DONE);

In the above, process() can proceed undisturbed (subject to the ISR caveat mentioned above), monopolizing the processor for as long as it takes. There will be no need for synchronization primitives within process() -- because nothing else can access the resources that it is using! *If* a task "of higher priority" (ick) is ready and waiting for the PROCESSING_DONE event, then the raise_event() call will result in THAT task gaining control of the processor. To the task that had done this process()ing, the raise_event() call will just seem to take a longer time than usual!

It's easy to see how a time-driven mechanism is added to such a system: you just treat the jiffy as a significant event and let the scheduler reevaluate the run queue when it is signaled. I.e., every task in the run queue is effectively waiting on the JIFFY_OCCURRED event. (i.e., the jiffy becomes "just another source of events" that can cause the run queue to be reexamined)

It's easy to see how you can get the same benefits of cooperative multitasking with this preemptive approach without having to litter the code with "yield()" invocations. This leads to more readable code AND avoids the race/synchronization issues that time-driven preemption brings about. The developer does have to be aware that any OS call can result in a reschedule(), though!
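A sketch of that pattern in full; raise_event() and wait_event() belong to the hypothetical framework described above (they are not a real RTOS API), and the worker's only preemption point, ISRs aside, is the event call at the end of its long job.

    #define PROCESSING_DONE  1

    extern void process(void);           /* long-running work (hypothetical) */
    extern void publish_results(void);   /* hypothetical */
    extern void raise_event(int event);  /* may reschedule: a higher-priority waiter can run now */
    extern void wait_event(int event);   /* blocks until the event is raised */

    void worker_task(void)
    {
        for (;;) {
            process();                    /* runs undisturbed (barring ISRs); no locks needed,
                                             because nothing else runs until we call the OS */
            raise_event(PROCESSING_DONE); /* the only preemption point in this loop */
        }
    }

    void reporter_task(void)
    {
        for (;;) {
            wait_event(PROCESSING_DONE);  /* sleeps until the worker signals */
            publish_results();
        }
    }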
Clifford Heath wrote:
> On 6/1/20 6:39 am, Les Cargill wrote:
>> Clifford Heath wrote:
>>> On 5/1/20 11:58 am, pozz wrote:
>> <snip>
>>>
>>> I wish I could, but it is actually a frightfully difficult subject.
>>> Basically it's the same as thread-safe programming.
>>> Only about 1% of programmers think they can do it.
>>> Of those, only about 1% actually can.
>>>
>>
>> I could show anyone how in an afternoon. So I take your statement as
>> being "only 1% of 1% have been forced to take that afternoon to learn
>> it."
>>
>> I should qualify that - I could show anyone working on a classic
>> architecture how. With multilevel caches and certain sorts of MMUs,
>> there may be more to it.
>>
>> I'm thinking the WindRiver drivers course was about one week, which
>> should about cover everything conceptually.
>>
>>> It's the 0.99% that you have to worry about. At least some of them
>>> for Toyota. Don't be one of them!
>>>
>>> However, this difficulty is precisely why Rust was created. Although
>>> I haven't yet done a project in Rust, I've done enough multi-threaded
>>> work in C++ to know that the ideas in Rust are a massive leap
>>> forwards, and anyone doing this kind of work (especially
>>> professionally) owes it to their users to learn it.
>>>
>>
>> I seriously doubt Rust represents some quantum leap here.
>>
>> <snip>
>>>
>>> You need to understand about basic mutex operations, preferably also
>>> semaphores, and beyond that to read and write barriers (if you want
>>> to write lock-free code). It's a big subject.
>>>
>>
>> And that's about an afternoon, really. Not so much the barriers and
>> bothering with lock-free. That may take a little more.
>
> Behold a 0.99%-er!
>
> :)
We can larf, but I think there's less to serialization than is made of it.

-- Les Cargill
Richard Damon wrote:
> On 1/5/20 2:32 PM, Les Cargill wrote:
>> pozz wrote:
>>> On 03/01/2020 15:19, David Brown wrote:
>> <snop>
>>>
>>> You're right, cooperative scheduling is better if I want to reuse the
>>> functions used in superloop architecture (that is a cooperative
>>> scheduler).
>>>
>>
>> Preemptive scheduling probably causes more problems than it solves,
>> over some problem domains. SFAIK, cooperative multitasking can be very
>> close to fully deterministic, with interrupts being the part that's
>> not quite deterministic.
>>
>
> Preemptive scheduling solves a lot of serious issues when there are
> significant Real-Time requirements,
I don't know how to square the two things being said in that sentence fragment. Preemptive is inherently less deterministic than cooperative.
> as without it, every task needs to
> at least check for a possible task switch often enough to allow the
> tight real-time operations to complete on time.
Yes. You need to conform to some granularity in time.
> Yes, if that operation
> can be done COMPLETELY in the hardware ISR,
Not so much...
> then other operations don't
> need to worry about them as they are just interrupts. It isn't that
> uncommon for these sorts of operations to need resources that mean they
> can't just complete in an ISR.
>
This isn't about interrupts; it's about chunking the mainline processing into phrases. After each phrase the thread can block.
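A small sketch of that chunking, with invented phase names: the long job becomes a state machine whose steps are each bounded in time, so the loop that drives it can block, yield, or service other work between phrases.

    typedef enum { PH_READ, PH_FILTER, PH_TRANSMIT } phase_t;

    static phase_t phase = PH_READ;

    /* Does one bounded phase; returns nonzero while there is more work in the job. */
    int job_step(void)
    {
        switch (phase) {
        case PH_READ:
            /* read one block of input (bounded time) */
            phase = PH_FILTER;
            return 1;
        case PH_FILTER:
            /* filter that block (bounded time) */
            phase = PH_TRANSMIT;
            return 1;
        case PH_TRANSMIT:
            /* queue the result for output, then start over */
            phase = PH_READ;
            return 0;                 /* whole job done; caller may block here */
        }
        return 0;
    }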
> Which is better for a given set of tasks is very dependent on those
> tasks, and the skill set of the programmer(s). I tend to find for the
> problems I personally run into, preemption works well.
Preemption is often the default state now, and people simply get used to it.

-- Les Cargill