
No dynamic memory with g++ on cortex-m

Started by Dave Nadler, November 5, 2016
On 07/11/16 22:11, Don Y wrote:
> On 11/7/2016 1:56 PM, upsidedown@downunder.com wrote:
>> On Mon, 7 Nov 2016 02:55:30 +0000 (UTC), antispam@math.uni.wroc.pl
>> wrote:
>>
>>> Dave Nadler <drn@nadler.com> wrote:
>>>> For years *one* of the *many* ways we've made reliable C++
>>>> embedded is by forbidding dynamic storage (except heap).
>>> ^^^^
>>> You mean stack? Allocating memory at startup is reasonably
>>> safe: each time you allocate the same amount so can easily
>>> test if you stay within limit. Such allocation can be
>>> useful for static constructors in library functions.
>>
>> Slightly off topic, why are C-programs so keen of using heap
>> (malloc/free) instead of allocating temporary table variables
>> (malloca) on stack as in e.g. Pascal ?
Pascal and C both use the heap and the stack for the same purpose - there is no significant difference. Local variables are kept on the stack, allocated memory is on the heap. Allocating on the stack is usually a good deal faster, but the memory is deallocated whenever you leave the function that allocated it, and on some targets there may be more restrictions on the stack space than heap space. So stack allocation is great for short-lived dynamic memory, especially if it is small, but you can't hold on to the data after the function returns.

Also, there is no standard C stack space allocation function. Many (most?) C libraries have alloca(), which does the job - but it is not part of the standard. And while overloading the heap gives clear errors (malloc returns 0), overloading the stack is just undefined behaviour.

VLAs give you a standardised way to allocate space on the stack in C. There is no standard way to do it in C++, though many compilers (such as gcc) support both VLAs and alloca() in C++. Of course, if you know the size you want at compile time, you can freely create whatever local variables you want on the stack.

It is a good while since I have programmed in Pascal - I can't remember any way of getting stack memory other than local variables whose sizes are known at compile time.
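The lifetime point above can be seen in a few lines of C++ (a minimal sketch; the function name is invented for illustration): a stack buffer dies when its function returns, so any result that must outlive the call has to be copied out first.

```cpp
#include <cstdio>
#include <string>

// A stack buffer is fast and automatically freed, but it vanishes on
// return - so the result is copied into a std::string before buf dies.
std::string make_greeting(const char* name) {
    char buf[64];                              // stack allocation
    std::snprintf(buf, sizeof buf, "hello, %s", name);
    return std::string(buf);                   // copy survives the return
}
```

Returning a pointer to `buf` itself, instead of copying, would be exactly the dangling-pointer bug the paragraph warns about.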
>>
>> Even some FORTRAN IV compilers supported something like
>>
>> SUBROUTINE FOO (N,M)
>> DIMENSION TEMP (N,M)
>>
>> In which the floating point two dimensional TEMP array on the
>> subroutine stack had different size, depending on which parameters
>> were used to call FOO.
>>
>> Did this have something to do with the situation that some of the
>> early processors used in C implementation did not support stack (or
>> index register) relative addressing and only supported some kind of
>> return address stack ?
>
> IIRC, variable length *auto* arrays were not supported prior to C99.
Correct - VLAs were standardised in C99. (They were a gcc extension before that, and alloca() was commonly implemented.)
>
> There is also the issue of whether the array's size is part of
> its *type* -- or *value*. (and, in the latter case, if it is
> accessible as such!)
>
If the size of an array is known at compile time, then its size is part of its type (to the extent that C distinguishes between different array sizes as types). If it is a VLA, then the size is not fixed at compile time (though it might be known by the compiler through optimisation, or just because the VLA happens to have a constant size: "const int n = 10; int xs[n];" declares a VLA in C, and an ordinary array in C++). The compiler knows the size, however, because it needed the size when creating the VLA - thus "sizeof" works fine on VLAs. This is also the one situation where sizeof may evaluate its argument for side effects - but if that makes any difference to your program, you have really messed things up!

When you pass an array as a parameter to a function, however, the parameter is merely a pointer to the first element of the array - you lose all size information regardless of how the array was defined. (C++ std::array<> types are different.)
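The decay described above is easy to demonstrate in C++ (a sketch with invented function names): a plain array parameter is really a pointer, while a reference-to-array template parameter keeps the size in the type.

```cpp
#include <cstddef>

// The "[10]" here is decoration only: the parameter type is const int*,
// so sizeof reports the size of a pointer, not of the array.
std::size_t decayed_size(const int a[10]) {
    return sizeof(a);                  // sizeof(const int*)
}

// A reference to an array of N ints keeps the size as part of the type,
// so sizeof sees the whole array (std::array<> behaves similarly).
template <std::size_t N>
std::size_t real_size(const int (&a)[N]) {
    return sizeof(a);                  // N * sizeof(int)
}
```

Calling both with the same `int xs[10]` shows the difference: the first returns `sizeof(int*)`, the second returns 40 on a typical 32-bit int target.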
On Tue, 08 Nov 2016 10:37:51 +0100, David Brown
<david.brown@hesbynett.no> wrote:

>On 07/11/16 22:11, Don Y wrote:
>> On 11/7/2016 1:56 PM, upsidedown@downunder.com wrote:
>>> On Mon, 7 Nov 2016 02:55:30 +0000 (UTC), antispam@math.uni.wroc.pl
>>> wrote:
>>>
>>>> Dave Nadler <drn@nadler.com> wrote:
>>>>> For years *one* of the *many* ways we've made reliable C++
>>>>> embedded is by forbidding dynamic storage (except heap).
>>>> ^^^^
>>>> You mean stack? Allocating memory at startup is reasonably
>>>> safe: each time you allocate the same amount so can easily
>>>> test if you stay within limit. Such allocation can be
>>>> useful for static constructors in library functions.
>>>
>>> Slightly off topic, why are C-programs so keen of using heap
>>> (malloc/free) instead of allocating temporary table variables
>>> (malloca) on stack as in e.g. Pascal ?
>
>Pascal and C both use the heap and the stack for the same purpose -
>there is no significant difference. Local variables are kept on the
>stack, allocated memory is on the heap. Allocating on the stack is
>usually a good deal faster, but the memory is deallocated whenever you
>leave the function that allocated it, and on some targets there may be
>more restrictions on the stack space than heap space. So stack
>allocation is great for short-lived dynamic memory, especially if it is
>small, but you can't hold on to the data after the function returns.
If that data needs a longer lifetime, allocate it at a higher level in the call stack and pass a pointer to the lower levels.
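That "allocate high, use low" pattern can be sketched in a few lines (illustrative names only, not from the thread): the deep function fills a buffer owned by its caller and never allocates anything itself.

```cpp
#include <cstddef>

// The deep function only borrows the buffer; ownership and lifetime
// are decided higher up the call stack.
void fill_pattern(int* buf, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        buf[i] = static_cast<int>(i * i);
}

int top_level() {
    int scratch[8];            // allocated at the top of the call chain
    fill_pattern(scratch, 8);  // lower level just receives a pointer
    return scratch[7];         // data is still valid back here
}
```

Because the top-level frame outlives every callee, the data remains valid for as long as the caller needs it, with no heap involved.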
>Also, there is no standard C stack space allocation function. Many >(most?) C libraries have alloca(), which does the job - but it is not >part of the standard. And while overloading the heap gives clear errors >(malloc returns 0), overloading the stack is just undefined behaviour.
Look at this issue from the standpoint of an upper-level application/system designer: what do you do if malloc() returns NULL in some deep function?

The simplest answer is to restart the program and hope that the problem goes away. However, if the system must continue to work in a sensible way even when malloc fails due to heap fragmentation, what do you do?
On Tue, 8 Nov 2016 02:31:42 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

>On 11/8/2016 1:57 AM, upsidedown@downunder.com wrote:
>>>>> // using free must cause a linker error
>>>>> void free( void * p ){
>>>>> void if_you_see_this_you_are_using_free_or_delete();
>>>>> if_you_see_this_you_are_using_free_or_delete();
>>>>> }
>>>>
>>>> free() is a no-no, I disable it after the first millisecond and let
>>>> the program run for 10-40 years.
>>>
>>> So, all of your objects are persistent? (e.g., no "temporary/transient"
>>> tasks that come and go -- instantiating then releasing their stacks, etc.)
>>
>> The nice thing about stacks is that the worst case stack allocation
>> can be determined statically. Even if recursion is used, the worst
>> case allocation is bounded as long as you have a strict limit to the
>> recursion depth.
>>
>> The problem with heaps is the fragmentation of the heap in a long
>> running system. It might seem running OK for the first year, but will
>> it run for the next ten years ?
>
>That depends on how that memory is physically implemented.
>E.g., slip a VMM under it and issues change (albeit at the
>granularity of pages)
If you have some virtual memory system available, that helps a lot. Unfortunately, a virtual memory system requires backing storage to hold the "dirty" pages - a page file.

If rotating fans and rotating disks can't be used, the only alternative is a Flash disk with a limited number of write cycles.
On 08/11/16 12:38, upsidedown@downunder.com wrote:
> On Tue, 08 Nov 2016 10:37:51 +0100, David Brown
> <david.brown@hesbynett.no> wrote:
>
>> On 07/11/16 22:11, Don Y wrote:
>>> On 11/7/2016 1:56 PM, upsidedown@downunder.com wrote:
>>>> On Mon, 7 Nov 2016 02:55:30 +0000 (UTC), antispam@math.uni.wroc.pl
>>>> wrote:
>>>>
>>>>> Dave Nadler <drn@nadler.com> wrote:
>>>>>> For years *one* of the *many* ways we've made reliable C++
>>>>>> embedded is by forbidding dynamic storage (except heap).
>>>>> ^^^^
>>>>> You mean stack? Allocating memory at startup is reasonably
>>>>> safe: each time you allocate the same amount so can easily
>>>>> test if you stay within limit. Such allocation can be
>>>>> useful for static constructors in library functions.
>>>>
>>>> Slightly off topic, why are C-programs so keen of using heap
>>>> (malloc/free) instead of allocating temporary table variables
>>>> (malloca) on stack as in e.g. Pascal ?
>>
>> Pascal and C both use the heap and the stack for the same purpose -
>> there is no significant difference. Local variables are kept on the
>> stack, allocated memory is on the heap. Allocating on the stack is
>> usually a good deal faster, but the memory is deallocated whenever you
>> leave the function that allocated it, and on some targets there may be
>> more restrictions on the stack space than heap space. So stack
>> allocation is great for short-lived dynamic memory, especially if it is
>> small, but you can't hold on to the data after the function returns.
>
> If that data needs a longer lifetime, allocate it at a higher level in
> the call stack and pass a pointer to the lower levels.
Sure, that can be done. But that won't help you implement a function called "allocateNewArray" :-) (For my own code, I try to minimise the use of dynamic memory, preferring to make everything statically allocated where possible and occasionally using VLAs on the stack. However, sometimes dynamic memory is almost unavoidable.)
>
>> Also, there is no standard C stack space allocation function. Many
>> (most?) C libraries have alloca(), which does the job - but it is not
>> part of the standard. And while overloading the heap gives clear errors
>> (malloc returns 0), overloading the stack is just undefined behaviour.
>
> Look at this issue from an application system upper level designer,
> what do you do if the malloc() returns NULL in some deep function ?
> What do you do ?
>
> The simplest case is to restart the program and hope that the problem
> go away ?
>
> However, if the system must continue to work in a sensible way even if
> malloc fails due to heap fragmentation, what do you do ?
>
And that is one of the key reasons to avoid malloc and dynamic memory in embedded systems! Memory pools, sized allocation groups, etc., can reduce the potential problem somewhat - but it is still a danger. About the only time when I do use dynamic memory is for Ethernet networking - it is hard to completely avoid it. So the typical reaction is simply to drop that particular network packet or connection. It is messy to keep it all leak-free, but usually that's the least bad reaction to memory allocation failure.
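The "drop the packet" reaction can be sketched with nothrow new, which returns nullptr on failure instead of throwing (the struct and function names here are illustrative, not from any real stack):

```cpp
#include <new>

struct Packet {
    unsigned char payload[1500];   // typical Ethernet MTU-sized buffer
};

// On allocation failure, drop the frame and report it; never crash.
bool handle_frame() {
    Packet* p = new (std::nothrow) Packet;
    if (!p)
        return false;              // out of memory: drop this frame
    // ... parse and process the packet here ...
    delete p;                      // freed promptly to limit fragmentation
    return true;
}
```

The caller treats a false return as a dropped frame; higher-level protocols (TCP retransmission, application retries) are what make this "least bad" reaction tolerable.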
On 08.11.2016 г. 13:52, upsidedown@downunder.com wrote:
> On Tue, 8 Nov 2016 02:31:42 -0700, Don Y <blockedofcourse@foo.invalid>
> wrote:
>
>> On 11/8/2016 1:57 AM, upsidedown@downunder.com wrote:
>>>>>> // using free must cause a linker error
>>>>>> void free( void * p ){
>>>>>> void if_you_see_this_you_are_using_free_or_delete();
>>>>>> if_you_see_this_you_are_using_free_or_delete();
>>>>>> }
>>>>>
>>>>> free() is a no-no, I disable it after the first millisecond and let
>>>>> the program run for 10-40 years.
>>>>
>>>> So, all of your objects are persistent? (e.g., no "temporary/transient"
>>>> tasks that come and go -- instantiating then releasing their stacks, etc.)
>>>
>>> The nice thing about stacks is that the worst case stack allocation
>>> can be determined statically. Even if recursion is used, the worst
>>> case allocation as long as you have a strict limit to the recursion
>>> depth.
>>>
>>> The problem with heaps is the fragmentation of the heap in a long
>>> running system. It might seem running OK for the first year, but will
>>> it run for the next ten years ?
>>
>> That depends on how that memory is physically implemented.
>> E.g., slip a VMM under it and issues change (albeit at the
>> granularity of pages)
>
> If you have some virtual memory system available that helps a lot
>
> Unfortunately a virtual memory system requires a backup storage to
> hold the "dirty" pages, a page file.
>
> If rotating fans and rotating disk can't be used, the only alternative
> is a Flash disk with limited number of write cycles.
>
While that is generally so, VM does not necessarily need to swap memory. If your physical memory is say 64M and your logical space is set to say 1G _and_ you know the 64M will never be all used up, the benefit of the VM is that fragmentation will be much less of a problem.

Then again, if memory is allocated using a worst-fit strategy, fragmentation is not much of an issue even without VM.

To me (writing under DPS) stack usage and general allocation are both just in use. Stacks are nice because of what you already stated; generic allocation (not sure what you'd call it, when you do a system call to allocate that much memory and you get returned the address and the actually allocated size - the latter probably somewhat more than requested because of the granularity) is good because it can be deallocated "out of order" (unlike stack frames).

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
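The worst-fit idea mentioned above - always carve from the largest free block, which tends to leave usable remainders - can be sketched as a block-selection routine (a toy sketch only; the free-list bookkeeping of a real allocator is omitted, and `pick_worst_fit` is an invented name):

```cpp
#include <cstddef>
#include <vector>

// Given the sizes of the current free blocks, return the index of the
// largest block that can satisfy the request, or free_sizes.size() if
// no block fits (a "not found" sentinel).
std::size_t pick_worst_fit(const std::vector<std::size_t>& free_sizes,
                           std::size_t request) {
    std::size_t best = free_sizes.size();
    for (std::size_t i = 0; i < free_sizes.size(); ++i)
        if (free_sizes[i] >= request &&
            (best == free_sizes.size() || free_sizes[i] > free_sizes[best]))
            best = i;              // keep the largest fitting block so far
    return best;
}
```

Contrast with best-fit, which would pick the *smallest* fitting block and tends to leave behind slivers too small to reuse.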
On 11/8/2016 4:52 AM, upsidedown@downunder.com wrote:
> On Tue, 8 Nov 2016 02:31:42 -0700, Don Y <blockedofcourse@foo.invalid>
> wrote:
>
>> On 11/8/2016 1:57 AM, upsidedown@downunder.com wrote:
>>>>>> // using free must cause a linker error
>>>>>> void free( void * p ){
>>>>>> void if_you_see_this_you_are_using_free_or_delete();
>>>>>> if_you_see_this_you_are_using_free_or_delete();
>>>>>> }
>>>>>
>>>>> free() is a no-no, I disable it after the first millisecond and let
>>>>> the program run for 10-40 years.
>>>>
>>>> So, all of your objects are persistent? (e.g., no "temporary/transient"
>>>> tasks that come and go -- instantiating then releasing their stacks, etc.)
>>>
>>> The nice thing about stacks is that the worst case stack allocation
>>> can be determined statically. Even if recursion is used, the worst
>>> case allocation as long as you have a strict limit to the recursion
>>> depth.
>>>
>>> The problem with heaps is the fragmentation of the heap in a long
>>> running system. It might seem running OK for the first year, but will
>>> it run for the next ten years ?
>>
>> That depends on how that memory is physically implemented.
>> E.g., slip a VMM under it and issues change (albeit at the
>> granularity of pages)
>
> If you have some virtual memory system available that helps a lot
>
> Unfortunately a virtual memory system requires a backup storage to
> hold the "dirty" pages, a page file.
No, it doesn't. You only need backing store if your total virtual memory (i.e., in use at any particular instant) will exceed your "real" physical memory. There are many reasons to implement VMM and benefits that come with it.
> If rotating fans and rotating disk can't be used, the only alternative
> is a Flash disk with limited number of write cycles.
On 11/8/2016 4:38 AM, upsidedown@downunder.com wrote:
> Look at this issue from an application system upper level designer,
> what do you do if the malloc() returns NULL in some deep function ?
> What do you do ?
That depends on what the application can tolerate!
> The simplest case is to restart the program and hope that the problem
> go away ?
No, the simplest case might be to "wait and try again". Or, better yet, tell the allocator to register your *request* and block until it can be satisfied (because you, the system designer, KNOW that the overload is temporary and, as long as other tasks -- incl the task that is currently holding memory to be released -- can continue to run, your needs will eventually be met).

[What do you do if taskA's TRANSIENT request *happens* to be satisfied before -- or after! -- taskB's? Do you ensure you have overprovisioned just to address that potential case? Do you spin on malloc==NULL until it succeeds?]
> However, if the system must continue to work in a sensible way even if
> malloc fails due to heap fragmentation, what do you do ?
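The "wait and try again" reaction can be sketched as a bounded retry loop (an illustrative sketch only; in a real RTOS you would block on the allocator's "memory released" event rather than poll, and `try_alloc` is an invented name):

```cpp
#include <cstddef>
#include <cstdlib>

// Retry the allocation a bounded number of times before giving up,
// on the assumption that another task will release memory soon.
void* try_alloc(std::size_t n, int max_retries = 3) {
    for (int i = 0; i <= max_retries; ++i) {
        if (void* p = std::malloc(n))
            return p;              // success, possibly after waiting
        // here: yield / sleep / block on an allocator wake-up signal
    }
    return nullptr;                // caller must still handle total failure
}
```

Note the loop is bounded: an unbounded spin on malloc==NULL is exactly the failure mode questioned above.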
Hi Dimiter,

On 11/8/2016 6:11 AM, Dimiter_Popoff wrote:
>>>> The problem with heaps is the fragmentation of the heap in a long
>>>> running system. It might seem running OK for the first year, but will
>>>> it run for the next ten years ?
>>>
>>> That depends on how that memory is physically implemented.
>>> E.g., slip a VMM under it and issues change (albeit at the
>>> granularity of pages)
>>
>> If you have some virtual memory system available that helps a lot
>>
>> Unfortunately a virtual memory system requires a backup storage to
>> hold the "dirty" pages, a page file.
>>
>> If rotating fans and rotating disk can't be used, the only alternative
>> is a Flash disk with limited number of write cycles.
>
> While generally so VM does not necessarily need to swap memory.
> If your physical memory is say 64M and your logical space is set to
> say 1G _and_ you know the 64M will never be all used up, the benefit
> of the VM is that fragmentation will be much less of a problem.
It also lets you "write protect" memory, trap on writes to memory that shouldn't be written, trap on writes to memory that *should* be written, provide protected address spaces, share memory with different processes (and in different places), move large blocks of memory in constant time, etc. Put the stack "someplace special" (in that large, "empty" address space) and let the hardware tell you when you need to allocate another page for the stack -- or, when you've *underrun* it, etc. [Things that are hard to do with pure software mechanisms]
> Then if memory is allocated using worst-fit strategy fragmentation
> is not much of an issue even without VM.
>
> To me (writing under DPS) stack usage and general allocation
> are just both in use, stacks are nice because of what you already
> stated, generic allocation (not sure how you'd call that, when
> you do a system call to allocate that much memory and you get
> returned the address and the actually allocated size (the latter
> probably somewhat more than requested because of the granularity))
> is good because it can be deallocated "out of order" (unlike stack
> frames).
The problem with most memory allocators is they are used for a hodge-podge of unstructured/unordered requests. E.g., override new() and you'll have a better chance at a "well behaved" memory subsystem.
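Overriding new per class can look roughly like this (a sketch under assumptions: `Msg`, `POOL_SLOTS`, and `SLOT_SIZE` are invented names, and a real system would need thread safety): every Msg comes from a static pool of fixed-size slots, so these allocations are O(1) and can never fragment a general-purpose heap.

```cpp
#include <cstddef>
#include <new>

constexpr std::size_t POOL_SLOTS = 4;
constexpr std::size_t SLOT_SIZE  = 32;   // must be >= sizeof(Msg)

// Statically allocated pool: sized at link time, no heap involved.
alignas(std::max_align_t) static unsigned char pool[POOL_SLOTS][SLOT_SIZE];
static bool in_use[POOL_SLOTS] = {};

class Msg {
public:
    int id = 0;

    // Class-specific nothrow new: hand out the first free slot,
    // or nullptr when the pool is exhausted (no heap fallback).
    static void* operator new(std::size_t, const std::nothrow_t&) noexcept {
        for (std::size_t i = 0; i < POOL_SLOTS; ++i)
            if (!in_use[i]) { in_use[i] = true; return pool[i]; }
        return nullptr;
    }
    static void operator delete(void* p) noexcept {
        for (std::size_t i = 0; i < POOL_SLOTS; ++i)
            if (p == pool[i]) { in_use[i] = false; return; }
    }
    // Matching placement delete, used if a constructor throws.
    static void operator delete(void* p, const std::nothrow_t&) noexcept {
        Msg::operator delete(p);
    }
};
static_assert(sizeof(Msg) <= SLOT_SIZE, "pool slot too small for Msg");
```

Usage is `Msg* m = new (std::nothrow) Msg;` followed by a nullptr check; because all requests are the same size, the "hodge-podge of unstructured requests" problem disappears for this class.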
On Monday, November 7, 2016 at 5:01:19 PM UTC-5, Wouter van Ooijen wrote:
> ...if possible, I prefer not to use malloc at all,
> because it proves
> beyond doubt that:
> - malloc isn't used after that initial phase
> - apart from stack size(s), the application will fit in memory
Exactly...
On Monday, November 7, 2016 at 12:59:30 PM UTC-5, Tim Wescott wrote:
> If you're not going to allow global constructors,
> what's the point of using C++?
Because of ordering issues, I've typically placed "global lifetime" objects as static objects inside (ordered by code) subsystem initialization routines, then referred to them via static pointers. Not ideal, but a workable way to order the initialization.

Global ctors aside, C++ provides huge advantages:
- type safety
- RAII, especially preventing resource leaks via dtors
- templates
- specialized storage allocation by class if required
and so forth. All without any dynamic allocation except on the stack.

If only there were a way to have exceptions without the heap; exceptions really do help make safer code. It might be possible in some C++ toolchains if throws are limited to pointers (to static exception info)? Depends, I guess, on the implementation (i.e., does exception processing rely on RTTI).
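The ordering idiom described above can be sketched with function-local statics (all names here are illustrative): each "global lifetime" object lives inside its subsystem's accessor, so construction follows the explicit call order rather than the unspecified cross-translation-unit order of global ctors.

```cpp
#include <string>
#include <utility>

struct Logger {
    std::string tag;
    explicit Logger(std::string t) : tag(std::move(t)) {}
};

// Constructed on first call, exactly once; order is under our control.
Logger& logger() {
    static Logger instance("uart0");
    return instance;
}

struct Motor {
    Logger& log;                   // safe: logger() has already run
    Motor() : log(logger()) {}
};

Motor& motor() {
    static Motor instance;         // constructing Motor forces Logger first
    return instance;
}
```

One embedded caveat: function-local statics pull in the compiler's thread-safe guard machinery (e.g. `__cxa_guard_acquire` with g++), which some bare-metal projects disable with `-fno-threadsafe-statics`.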