
Language feature selection

Started by Don Y, March 5, 2017
On 08-Mar-17 at 13:18, David Brown wrote:
> On 08/03/17 10:49, Don Y wrote:
>> On 3/7/2017 3:29 PM, Wouter van Ooijen wrote:
>>> On 07-Mar-17 at 18:31, Don Y wrote:
>>>> On 3/6/2017 12:04 AM, Wouter van Ooijen wrote:
>>>>> On 06-Mar-17 at 00:03, Don Y wrote:
>>>>>> A quick/informal/UNSCIENTIFIC poll:
>>>>>>
>>>>>> What *single* (non-traditional) language feature do you find most
>>>>>> valuable in developing code? (and, applicable language if unique
>>>>>> to *a* language or class of languages)
>>>>>
>>>>> I can't think of any language feature that is useful in isolation.
>>>>
>>>> You, of course, wouldn't use *a* "single-feature language" for anything!
>>>>
>>>> OTOH, there are certain features of certain languages that are more
>>>> useful/valuable than other "features".
>>>
>>> IMO no. The most useful feature of a language is that its features
>>> work well together.
>>
>> So, you're saying that if I removed *any* one feature from ANY particular
>> language (of your choosing), the language would be USELESS?
>>
>> Or, that if I were to enumerate all of the N "features" of a particular
>> language and then systematically remove just ONE of those features --
>> creating N different versions of the language, in the process -- each
>> would be an equally useful/useless language, as contrasted to the original?
>>
>> A "scientific calculator" is (for a certain class of applications) a more
>> useful tool than a "four function calculator". What makes it so? Which
>> feature ("function" in calculator-speak) added to a 4-function calculator
>> represents the biggest incremental value increase? log? ln? y^x? sqrt?
>>
>> In a simpler scope, which of the features of the 4-function calculator
>> would you "miss" most if it were elided? Would its removal render the
>> 3-function calculator "useless"?
>>
>> [Personally, I'd rate division as the "most valuable" feature to have on
>> such a calculator. I can do multiplication, addition and subtraction with
>> relative ease. But division is *tedious* and far more error prone than
>> the other operations. OTOH, the "feature" that I would miss most would
>> be "clear entry" -- as it allows me to correct typographical/keystroke
>> errors without having to reissue the entire computation sequence (as
>> would be required if I invoked "CLEAR")]
>>
>>> I am a C++ fan, mainly because it enables me to do a lot at
>>> compile-time. But the C++ features that enable this were, especially
>>> pre-C++11 (templates), rather clumsy to use. C++14 constexpr is much
>>> better designed and mostly a joy to use (but there are still a few
>>> rough edges).
>>
>> So, you're admitting there are features of C++14 that are MORE useful than
>> "equivalent" features from C++11.
>>
>> I.e., why aren't you STILL USING C++11?? You've decided (by your
>> preference for C++14) that there are BETTER features in C++14 vs. C++11.
>> So, you have admitted that some features are "better" (more valuable)
>> than others. You didn't abandon C++11 (prior to 2014) because it didn't
>> have the features of C++14!
>
> I can't answer for Wouter,
But you did so, and did it very well :)

Wouter "Objects? No Thanks" van Ooijen
On Tue, 7 Mar 2017 00:27:57 -0700, Don Y <blockedofcourse@foo.invalid> wrote:

> On 3/6/2017 8:15 PM, George Neuner wrote:
>> On Mon, 6 Mar 2017 13:36:49 -0700, Don Y <blockedofcourse@foo.invalid> wrote:
>>> On 3/6/2017 11:33 AM, George Neuner wrote:
>>>
>>>> Closures. Far more useful than (OO) objects.
>>>
>>> But doesn't this complicate the run-time (e.g., drag in GC)?
>>> Or, can all of this (subsetted) be handled at compile time?
>>
>> Not necessarily. Closures don't require GC at all unless they can
>> persist beyond their definition scope.
>
> But that limits their utility.
Yes and no. It limits their scope downwards ... but that actually isn't as limiting as it sounds. It does not prevent, e.g., modular systems.
>> Think about nested functions as in Pascal, but augmented with private
>> persistent variables (trying to avoid the word "static" to prevent
>> unwanted associations with C). This form of closure is stack-strict
>> and does not require any heap allocation[1].
>
> Yes, but that limits the function's scope. It makes it difficult to
> "import" these from other modules.
I think you're confusing the closure with the function, and the scope of
the closure with the scope of the function.

A closure can be defined over a function anywhere the function is in
scope. A function F exported from module X may be used by a closure in
module Y which imports X. Similarly, a closure defined in module X may
be exported from X as an opaque object.

Recall that a module may require "initialization" when it is imported.
Closures defined for export would be created at that time. You can do
this even with stack-bound closures. Consider that imported namespaces
need to be available before the importing module's code can execute.
However, even in the case of the 1st (top) module, the *process*
invoking it already exists, and therefore there is already a stack.

With appropriate language support, a module which exports closures can
construct them on the process stack at the point when the module is 1st
imported. Then they would be available anywhere "below" the site of the
import (subject to visibility).

You would need to be careful of multiple imports in such a scenario,
but that is simply a namespace issue: additional imports would simply
reference the existing namespace and the closures created by the 1st
import. [This would not be any different given heap closures - all the
imports would still reference the same objects.]
> [E.g., consider the heap instantiator example: why does *it* have
> to persist just so something it provides remains accessible?]
In any case, the functions involved cannot be unloaded (at least not
easily) if they will be needed by something that is still running.

My point, though, is that the function and a closure that uses it are
2 different things. Their lifetimes necessarily are linked, but their
visibility scopes may be very different.

George
On 3/8/2017 5:18 AM, David Brown wrote:
> On 08/03/17 10:49, Don Y wrote:
> [...deep quoting snipped; it repeats the exchange quoted in full in
> Wouter's post above...]
>
> I can't answer for Wouter, but to me this sounds like you are trying to
> make arguments for argument's sake. What is so hard to understand about
> someone using C++11 when that was well supported, but once C++14 was
> common enough in tools, switching to C++14 to take advantage of useful
> new features?
USEFUL NEW FEATURES.
On 3/8/2017 2:14 AM, David Brown wrote:
> On 07/03/17 17:34, Don Y wrote:
>> On 3/7/2017 4:24 AM, David Brown wrote:
>>>> And, formed preferences for particular languages largely based on
>>>> what the language allowed them to *do* -- and, at some fuzzy
>>>> point in time, they thought to themselves: "<previous_language>
>>>> wouldn't let me *do* things like this!"
>>>
>>> Agreed (for people that have learned more than one language).
>>>
>>> But what people view as "traditional features" is going to be
>>> based almost totally on the languages they are most familiar with.
>>> If you learned to program using Haskell, then you are going to
>>> think in terms of lists, recursion, functions-of-functions, etc.
>>> The idea of incrementing a variable or using a pointer would be
>>> completely alien - but infinite lists are perfectly normal and
>>> "traditional". The opposite is true for someone programming in C.
>>
>> That's entirely my point! Should, instead, we argue over what
>> "traditional" means in the UNIVERSE of "programming" languages?
>> *Then*, individually decide which features "stand out" as "valuable"
>> in our own minds?
>
> I am sorry, I am still having trouble trying to figure out what you are
> asking for here.
Then I guess you can't answer the question (though others seem able to)
On 3/8/2017 10:20 AM, George Neuner wrote:
> [...quoting snipped; it repeats the stack-strict closure exchange
> above...]
>
> I think you're confusing the closure with the function, and the scope
> of the closure with the scope of the function.
>
> A closure can be defined over a function anywhere the function is in
> scope. A function F exported from module X may be used by a closure
> in module Y which imports X. Similarly, a closure defined in module X
> may be exported from X as an opaque object.
Two heaps created, each by referencing a function:

  instantiate_heap(memory, metrics, allocation_policy, release_policy, ...)

The module that *defines* that function must remain "loaded", as it
defines the memory managers (directly or indirectly, depending on what
is returned) for each heap. Any "choices" that are stored (again,
avoiding "static") in the function's definition need to persist beyond
the *invocation* of instantiate_heap.
> Recall that a module may require "initialization" when it is imported.
> Closures defined for export would be created at that time.
Yes. But who ensures they remain present <somewhere> after the module
itself has served its purpose? I.e., the state is "hidden" and not
obvious to the "invoker".

[If initialize_heap RETURNS "memory manager"s that are then used to
manage each individual heap, then initialize_heap() itself is still
dynamically bound to those objects. If you unload the module containing
it, then the memory manager objects (in this example) that it created
for the callers also disappear.]
> You can do this even with stack-bound closures. Consider that
> imported namespaces need to be available before the importing module's
> code can execute. However, even in the case of the 1st (top) module,
> the *process* invoking it already exists, and therefore there is
> already a stack.
>
> With appropriate language support, a module which exports closures can
> construct them on the process stack at the point when the module is
> 1st imported. Then they would be available anywhere "below" the site
> of the import (subject to visibility).
So, the memory manager example would necessitate loading that "system object" (memory MANAGER) into the user's process space. Or, keeping process/task specific "state" like that in some protected per-process portion of privileged memory.
> You would need to be careful of multiple imports in such a scenario,
> but that is simply a namespace issue: additional imports would simply
> reference the existing namespace and the closures created by the 1st
> import.
> [This would not be any different given heap closures - all the imports
> would still reference the same objects.]
>
>> [E.g., consider the heap instantiator example: why does *it* have
>> to persist just so something it provides remains accessible?]
>
> In any case, the functions involved cannot be unloaded (at least not
> easily) if they will be needed by something that is still running.
Does the language magically enforce this dependence? Or does the
developer have to be aware of the potential "gotcha"? Or does something
else reference count, etc.?

E.g., I rely on manual handling of pointers to implement these sorts of
mechanisms. But it is clear to me that I am storing *pointers* and, as
such, have to ensure that the object pointed AT remains accessible
throughout its potential use (like an object WITHIN a particular
module).

[This makes for interesting design tradeoffs as you decide which
objects and portions of objects can migrate and which must remain
/in situ/. It *also* makes security analysis more difficult, as I have
to consider the possibility of "untrusted" code being executed
/at privilege/.]

If a dependence exists on a "foreign" module, then the OS inherently
tracks the object reference and will prevent the referenced object from
being deleted/unloaded while the handle is still active (though it
doesn't guarantee that the object is deleted as soon as the last handle
is removed).
> My point, though, is that the function and a closure that uses it are
> 2 different things. Their lifetimes necessarily are linked, but their
> visibility scopes may be very different.
On 08/03/17 18:35, Don Y wrote:
> On 3/8/2017 5:18 AM, David Brown wrote:
>> On 08/03/17 10:49, Don Y wrote:
>>> On 3/7/2017 3:29 PM, Wouter van Ooijen wrote:
> [...deep quoting snipped; it repeats the C++11/C++14 exchange quoted
> in full upthread...]
>
>> I can't answer for Wouter, but to me this sounds like you are trying to
>> make arguments for argument's sake. What is so hard to understand about
>> someone using C++11 when that was well supported, but once C++14 was
>> common enough in tools, switching to C++14 to take advantage of useful
>> new features?
>
> USEFUL NEW FEATURES.
C++14 is an evolution from C++11, not a revolution (C++11 was a revolution from C++03). There are lots of small improvements, such as "constexpr" being more flexible or binary literals being part of the standard rather than a common extension. I don't think anyone is going to look at a programming problem and say "I can do this nicely in C++14, but it will be ugly and painful in C++11". But if they have the opportunity to use C++14 rather than C++11, then people will usually prefer it for the small advantages.
On 08/03/17 18:37, Don Y wrote:
> On 3/8/2017 2:14 AM, David Brown wrote:
> [...deep quoting snipped; it repeats the exchange quoted in full
> upthread...]
>
>> I am sorry, I am still having trouble trying to figure out what you are
>> asking for here.
>
> Then I guess you can't answer the question (though others seem able to)
People (including me) have been contributing to this thread - but I can't tell if anyone is answering your question. However, there is some interesting exchange of ideas going on - it doesn't really matter what your question is, or if we are answering it, as long as it's an enjoyable discussion.
On 2017-03-08 3:46 AM, David Brown wrote:
> On 08/03/17 05:10, Walter Banks wrote:
>> On 2017-03-05 5:03 PM, Don Y wrote:
>>> A quick/informal/UNSCIENTIFIC poll:
>>>
>>> What *single* (non-traditional) language feature do you find
>>> most valuable in developing code? (and, applicable language if
>>> unique to *a* language or class of languages)
>>
>> The @ in various forms to tie a physical address to a symbolic
>> variable. This construct, more than any other single thing, gives
>> many high level languages the ability to broaden the range of
>> potential applications from high level to close to the machine.
>>
>> It is language independent and very easy to add to compilers
>> without changing the basic form of the language.
>
> Do you mean having a compiler extension for:
>
>     volatile uint8_t REG @ 0x1234;
>
> rather than the standard C:
>
>     #define REG (*((volatile uint8_t*) 0x1234))
>
> Certainly the "@" syntax is neater, and certainly it is nice if it
> means "REG" turns up in the linker map file and debugging data. But
> it is hardly a breakthrough, and does not allow anything that cannot
> be done in normal C syntax just as efficiently (assuming the
> compiler implementation is sane).
>
> Most embedded compilers don't have anything equivalent to the "@"
> syntax - yet people seem to manage to use them perfectly well for
> "close to the machine" programming.
I saw it as an abstract question. My comment was about a clean, simple
way to interface directly to the machine, completely separate from the
language proper. Developers in a surprising number of languages have
found some roundabout way to accomplish this, demonstrating its need.

The C constant-pointer kludge, although functionally similar, generally
misses out on proper symbolic debugging support.

  uint8_t REG @ 0x1234;

is just as viable a declaration without the volatile. Many, if not
most, non-open-source compilers for embedded systems support some form
of @. The concept, as I have found, is just as useful in many languages
I have used.

w..
On 2017-03-07 10:35 PM, Paul Rubin wrote:
> Walter Banks <walter@bytecraft.com> writes:
>> The @ in various forms to tie a physical address to a symbolic
>> variable.
>
> Do you mean pointer variables, like *x in C?
>
>> It is language independent and very easy to add to compilers
>> without changing the basic form of the language.
>
> True in that the input programs can look about the same as before.
> But it can change the range of behaviours possible to the programs,
> making them more flexible (maybe good) but less predictable (maybe
> bad). So it's a trade-off like lots of other things are.
I was thinking of something functionally similar to the outcome of
constant pointers, without the baggage. That a C pointer variable (*x)
can access specific physical locations is not the same as a simple
address assignment to a variable, with source-level debugging and
symbol-table support for that variable at that address.

I wasn't specifically thinking of C when I posted the comment. It is a
valid comment in many languages.

w..
On 2017-03-08 12:48 AM, Don Y wrote:
> On 3/7/2017 9:10 PM, Walter Banks wrote:
>> On 2017-03-05 5:03 PM, Don Y wrote:
>>> A quick/informal/UNSCIENTIFIC poll:
>>>
>>> What *single* (non-traditional) language feature do you find
>>> most valuable in developing code? (and, applicable language if
>>> unique to *a* language or class of languages)
>>
>> The @ in various forms to tie a physical address to a symbolic
>> variable. This construct, more than any other single thing, gives
>> many high level languages the ability to broaden the range of
>> potential applications from high level to close to the machine.
>
> I assume you mean the "address of" operator? I.e., "where is 'foo'?"
>
>> It is language independent and very easy to add to compilers
>> without changing the basic form of the language.
>
> But, by itself, it is of little use. So, you know *where* something
> is. But, if you can't dereference the address (pointer), you can't
> effect any changes to the environment around it (the
> pointer/referenced object).
Far less complex. A simple way to declare a variable at a specific
physical address. Every other variable attribute remains untouched. It
is in the symbol table and debug data, and can be referenced like any
other variable. You can take its address, set a pointer to it, assign
to it, and read it.

w..