A very interesting view of programming as a profession can be found in Andrey Ershov's paper Aesthetics and the Human Factor in Programming (CACM, 1972). Another good article is Knuth's Computer Programming as an Art.
Slightly simplifying, there are two contrasting ideologies of programming: Software Realism and Software Idealism. The latter can in turn be subdivided into two major schools: Stallmanism and Raymondism.
Here is how I defined them in 1999 (the quote below is taken from the Softpanorama OSS page):
In the vision of Software Realists, programmers, like all humans, have weaknesses, are guided primarily by self-interest, and as such require formal organization and incentives (direct or indirect) to act outside the limits defined by their own self-interest. Software Realists see the evils of the software world as a given, derived from the limited and unhappy choices available and from the inherent moral and intellectual limitations of human beings and existing hardware. Some of them try to create better software, like the programmers involved in the creation of all the BSD flavors, but they consider commercial programmers their equals (many actually wear both hats) and do not object to the reuse of code they developed pro bono in proprietary software.
In this slightly tragic worldview, software development is hard and often underappreciated labor that requires a special and rather rare talent. People with this talent, like talented sports stars (tennis players or figure skaters, for example), risk their health while they are young to create short-lived but tremendously complex artifacts (large software systems are probably the most complex systems ever created by humans), and for this reason they should be remunerated accordingly. It is important to understand that, like artists painting on a sandy beach, programmers produce short-lived creations: the next wave of hardware often wipes them clean. For example, who now remembers the creators of the PL/1 optimizing and debugging compilers, compilers that were a real breakthrough in many areas of the compiler-writing art and next to which some modern compilers still look like junk, despite being written 30 years earlier?
In other words, Software Realists see that, due to the immense complexity of these artifacts, all software sucks; it is just that some software sucks less. It can be proprietary software or free software: everything depends on the talent of the creators, not so much on a particular ideology (which, by the way, can be completely wrong: the Soviets invented quite a few new technologies and managed to launch the first man into space). Thus, in the view of the Software Realist school, the perfection of software is unachievable, and good old Unix with its classic codebase might sometimes be preferable to the young Turks, be it Windows or Linux. That does not mean that the Linux or Windows codebase is all crap. It just means that they are merely other OSes in a long historical line of such systems. In some areas they might be better than the competition, in others worse, but none has a monopoly on innovation (actually, Linux is a pretty conservative kernel despite the radical ideology behind it).
Software Realists are unconvinced by claims of Linux superiority and want to see hard facts. Furthermore, history has taught them that the proper tradeoffs between the different subsystems of an OS can be ironed out only through a long, expensive and painful process that requires strong, highly paid managers, programmers and testers who are ready to sacrifice their health for the success of their creation. Real art requires sacrifices, and it is better when such sacrifices are properly remunerated, although stories of talented artists who died in poverty are nothing new.
Because real talents are rare, good software is a very expensive thing. The Software Realism school presupposes that modern software is almost always a compromise between the demands of the talent and the demands of the marketplace.
The Software Idealist school (in both its Stallmanist and Raymondist incarnations) holds that mankind in general, and programmers in particular, have not yet achieved their full moral potential, and that they are (at least in principle) perfectible if guided by a wonderful new software development ideology. The foolish and immoral choices inherent in the creation of proprietary software explain all the evils of the existing software world; a revolution was needed, and it has actually already come in the form of either the free software movement or its less pure form, the open source software movement. Both major Software Idealist schools rely on the volunteer labor of programmers connected via the Internet to produce immortal gems of software wisdom that will crush proprietary software developers like cockroaches.
In the Software Idealist worldview, whether purely moral incentives actually work in the long run, and what will happen when Linus Torvalds becomes yet another retired dot-com multimillionaire with his own yacht, is irrelevant to the achievement of true software justice, justice for all. This utopian view holds that the volunteer programmer's potential is immense and can accomplish everything, including improving human nature, which should rid itself of those outdated economic rewards for software development and be satisfied working part time at McDonald's in order to be able to participate in the movement. So the Software Idealist vision promotes the pursuit of the highest ideals, which somehow guarantees the best software solutions. At the end of the day, the newly liberated men should all storm the evil Bastille of software oppression, which is of course Microsoft, and dance on its ruins. Sometimes in their enthusiasm they attack other sinful, old-fashioned proprietary software vendors instead of Microsoft. Until the opening of Solaris (and even after that), their target was sometimes Sun.
And if the unwashed masses who corrupt their souls by using Windows are slow in catching on, then it is the role of the intellectual vanguard (the keepers of the programming craft, who in Eastern Europe might be called the "programming intelligentsia") to lead them, even if in the short run the masses are unhappy with the results because they cannot use the full capabilities of their laptops, cameras or scanners. In general, Software Idealists think that higher considerations should guide us and that those considerations somehow guarantee the creation of better software, software that is not only better but as perfect as it is free.
Again, this is a slight simplification, but I believe it captures the key points.
Jan 01, 2019 | softwareengineering.stackexchange.com
Is premature optimization really the root of all evil?
A colleague of mine today committed a class called ThreadLocalFormat, which basically moved instances of Java Format classes into a thread local, since they are not thread safe and "relatively expensive" to create. I wrote a quick test and calculated that I could create 200,000 instances a second, asked him whether he was creating that many, to which he answered "nowhere near that many". He's a great programmer and everyone on the team is highly skilled, so we have no problem understanding the resulting code, but it was clearly a case of optimizing where there is no real need. He backed the code out at my request. What do you think? Is this a case of "premature optimization" and how bad is it really? – Craig Day
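To make the pattern under discussion concrete, here is a minimal sketch of the same idea in C++ terms (the question itself is about Java's ThreadLocal; the Formatter type below is a hypothetical stand-in for an object that is expensive to create and not safe to share between threads):

    #include <cstdio>
    #include <string>

    // Hypothetical stand-in for an object that is costly to construct and not
    // safe to share between threads (the role played by Java's Format classes
    // in the question above).
    struct Formatter {
        std::string prefix;
        Formatter() : prefix("value=") { /* imagine expensive setup here */ }
        std::string format(double v) const { return prefix + std::to_string(v); }
    };

    std::string format_value(double v) {
        // One instance per thread, constructed lazily on first use in that
        // thread and reused afterwards -- the analogue of caching a Format
        // instance in a Java ThreadLocal.
        thread_local Formatter fmt;
        return fmt.format(v);
    }

    int main() {
        std::printf("%s\n", format_value(3.14).c_str());
        return 0;
    }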
I think you need to distinguish between premature optimization, and unnecessary optimization. Premature to me suggests 'too early in the life cycle' whereas unnecessary suggests 'does not add significant value'. IMO, requirement for late optimization implies shoddy design. – Shane MacLaughlin Oct 17 '08 at 8:53
It's important to keep in mind the full quote:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
What this means is that, in the absence of measured performance issues you shouldn't optimize because you think you will get a performance gain. There are obvious optimizations (like not doing string concatenation inside a tight loop) but anything that isn't a trivially clear optimization should be avoided until it can be measured.
The biggest problems with "premature optimization" are that it can introduce unexpected bugs and can be a huge time waster. – Scott Dorman
Being from Donald Knuth, I wouldn't be surprised if he had some evidence to back it up. BTW, Src: Structured Programming with go to Statements, ACM Journal Computing Surveys, Vol 6, No. 4, Dec. 1974. p.268. citeseerx.ist.psu.edu/viewdoc/ – mctylr Mar 1 '10 at 17:57
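As an aside, the "obvious optimization" the answer above mentions (string concatenation inside a tight loop) looks roughly like this; a minimal C++ sketch, not taken from the original thread:

    #include <cstdio>
    #include <string>
    #include <vector>

    // Repeated 'out = out + p' allocates and copies the whole string every time.
    std::string join_slow(const std::vector<std::string>& pieces) {
        std::string out;
        for (const auto& p : pieces)
            out = out + p;   // quadratic in the total length in the worst case
        return out;
    }

    // 'out += p' appends in place and grows the buffer geometrically.
    std::string join_fast(const std::vector<std::string>& pieces) {
        std::string out;
        for (const auto& p : pieces)
            out += p;        // roughly linear in the total length
        return out;
    }

    int main() {
        std::vector<std::string> pieces{"a", "b", "c"};
        std::printf("%s %s\n", join_slow(pieces).c_str(), join_fast(pieces).c_str());
        return 0;
    }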
Premature micro optimizations are the root of all evil, because micro optimizations leave out context. They almost never behave the way they are expected to.
Some good early optimizations, in order of importance:
- Architectural optimizations (application structure, the way it is componentized and layered)
- Data flow optimizations (inside and outside of application)
Some mid development cycle optimizations:
- Data structures, introduce new data structures that have better performance or lower overhead if necessary
- Algorithms (now it's a good time to start deciding between quicksort3 and heapsort ;-) )
Some end-of-development-cycle optimizations:
- Finding code hotspots (tight loops that should be optimized)
- Profiling based optimizations of computational parts of the code
- Micro optimizations can be done now as they are done in the context of the application and their impact can be measured correctly.
Not all early optimizations are evil; micro optimizations are evil if done at the wrong time in the development life cycle, as they can negatively affect architecture, hurt initial productivity, be irrelevant performance-wise, or even have a detrimental effect at the end of development due to different environment conditions.
If performance is a concern (and it always should be), always think big. Performance is about the bigger picture, not about things like "should I use int or long?". Go for top-down when working with performance instead of bottom-up.
– Pop Catalin
"Optimization: Your Worst Enemy", by Joseph M. Newcomer: flounder.com/optimization.htm – Ron Ruble May 23 '17 at 21:50
Optimization without first measuring is almost always premature.
I believe that's true in this case, and true in the general case as well. – Jeff Atwood
Hear, hear! Unconsidered optimization makes code unmaintainable and is often the cause of performance problems. E.g. you multi-thread a program because you imagine it might help performance, but the real solution would have been multiple processes, which are now too complex to implement. – James Anderson May 2 '12 at 5:01
Optimization is "evil" if it causes:
- less clear code
- significantly more code
- less secure code
- wasted programmer time
In your case, it seems like a little programmer time was already spent, the code was not too complex (a guess from your comment that everyone on the team is able to understand it), and the code is a bit more future-proof (being thread safe now, if I understood your description). Sounds like only a little evil. :)
– John Mulder
Only if the cost, in terms of your bullet points, is greater than the amortized value delivered. Often complexity introduces value, and in these cases one can encapsulate it such that it passes your criteria. It also gets reused and continues to provide more value. – Shane MacLaughlin Oct 17 '08 at 10:36
I'm surprised that this question is 5 years old, and yet nobody has posted more of what Knuth had to say than a couple of sentences. The couple of paragraphs surrounding the famous quote explain it quite well. The paper being quoted is called "Structured Programming with go to Statements", and while it's nearly 40 years old (it is about a controversy and a software movement that both no longer exist, and has examples in programming languages that many people have never heard of), a surprisingly large amount of what it says still applies.
Here's a larger quote (from page 8 of the pdf, page 268 in the original):
The improvement in speed from Example 2 to Example 2a is only about 12%, and many people would pronounce that insignificant. The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a one-shot job, but when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies.
There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail.
Another good bit from the previous page:
My own programming style has of course changed during the last decade, according to the trends of the times (e.g., I'm not quite so tricky anymore, and I use fewer go to's), but the major change in my style has been due to this inner loop phenomenon. I now look with an extremely jaundiced eye at every operation in a critical inner loop, seeking to modify my program and data structure (as in the change from Example 1 to Example 2) so that some of the operations can be eliminated. The reasons for this approach are that: a) it doesn't take long, since the inner loop is short; b) the payoff is real; and c) I can then afford to be less efficient in the other parts of my programs, which therefore are more readable and more easily written and debugged.
– Michael Shaw
I've often seen this quote used to justify obviously bad code or code that, while its performance has not been measured, could probably be made faster quite easily, without increasing code size or compromising its readability.
In general, I do think early micro-optimizations may be a bad idea. However, macro-optimizations (things like choosing an O(log N) algorithm instead of O(N^2)) are often worthwhile and should be done early, since it may be wasteful to write an O(N^2) algorithm and then throw it away completely in favor of an O(log N) approach.
Note the words may be : if the O(N^2) algorithm is simple and easy to write, you can throw it away later without much guilt if it turns out to be too slow. But if both algorithms are similarly complex, or if the expected workload is so large that you already know you'll need the faster one, then optimizing early is a sound engineering decision that will reduce your total workload in the long run.
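A minimal sketch of the kind of macro-optimization described above, using a made-up duplicate-detection task: the quadratic version is easy to write and easy to throw away, the sort-based version is the one you keep if the workload turns out to be large.

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // O(N^2): compare every pair. Fine for small N, easy to discard later.
    bool has_duplicates_quadratic(const std::vector<int>& v) {
        for (std::size_t i = 0; i < v.size(); ++i)
            for (std::size_t j = i + 1; j < v.size(); ++j)
                if (v[i] == v[j]) return true;
        return false;
    }

    // O(N log N): sort a copy, then look for equal neighbours.
    bool has_duplicates_sorted(std::vector<int> v) {
        std::sort(v.begin(), v.end());
        return std::adjacent_find(v.begin(), v.end()) != v.end();
    }

    int main() {
        std::vector<int> data{3, 1, 4, 1, 5};
        std::printf("%d %d\n", has_duplicates_quadratic(data), has_duplicates_sorted(data));
        return 0;
    }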
Thus, in general, I think the right approach is to find out what your options are before you start writing code, and consciously choose the best algorithm for your situation. Most importantly, the phrase "premature optimization is the root of all evil" is no excuse for ignorance. Career developers should have a general idea of how much common operations cost; they should know, for example,
- that strings cost more than numbers
- that dynamic languages are much slower than statically-typed languages
- the advantages of array/vector lists over linked lists, and vice versa
- when to use a hashtable, when to use a sorted map, and when to use a heap
- that (if they work with mobile devices) "double" and "int" have similar performance on desktops (FP may even be faster) but "double" may be a hundred times slower on low-end mobile devices without FPUs;
- that transferring data over the internet is slower than HDD access, HDDs are vastly slower than RAM, RAM is much slower than L1 cache and registers, and internet operations may block indefinitely (and fail at any time).
And developers should be familiar with a toolbox of data structures and algorithms so that they can easily use the right tools for the job.
Having plenty of knowledge and a personal toolbox enables you to optimize almost effortlessly. Putting a lot of effort into an optimization that might be unnecessary is evil (and I admit to falling into that trap more than once). But when optimization is as easy as picking a set/hashtable instead of an array, or storing a list of numbers in double[] instead of string[], then why not? I might be disagreeing with Knuth here, I'm not sure, but I think he was talking about low-level optimization whereas I am talking about high-level optimization.
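For example, the "set/hashtable instead of an array" case can be as small as this; a minimal C++ sketch with made-up function names:

    #include <cstdio>
    #include <string>
    #include <unordered_set>
    #include <vector>

    // O(N) per lookup: scan the whole vector.
    bool is_banned_scan(const std::vector<std::string>& banned, const std::string& user) {
        for (const auto& b : banned)
            if (b == user) return true;
        return false;
    }

    // O(1) average per lookup: hash table, same one-line call site.
    bool is_banned_hash(const std::unordered_set<std::string>& banned, const std::string& user) {
        return banned.count(user) != 0;
    }

    int main() {
        std::vector<std::string> v{"mallory", "trudy"};
        std::unordered_set<std::string> s{"mallory", "trudy"};
        std::printf("%d %d\n", is_banned_scan(v, "alice"), is_banned_hash(s, "trudy"));
        return 0;
    }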
Remember, that quote is originally from 1974. In 1974 computers were slow and computing power was expensive, which gave some developers a tendency to overoptimize, line-by-line. I think that's what Knuth was pushing against. He wasn't saying "don't worry about performance at all", because in 1974 that would just be crazy talk. Knuth was explaining how to optimize; in short, one should focus only on the bottlenecks, and before you do that you must perform measurements to find the bottlenecks.
Note that you can't find the bottlenecks until you have written a program to measure, which means that some performance decisions must be made before anything exists to measure. Sometimes these decisions are difficult to change if you get them wrong. For this reason, it's good to have a general idea of what things cost so you can make reasonable decisions when no hard data is available.
How early to optimize, and how much to worry about performance depend on the job. When writing scripts that you'll only run a few times, worrying about performance at all is usually a complete waste of time. But if you work for Microsoft or Oracle and you're working on a library that thousands of other developers are going to use in thousands of different ways, it may pay to optimize the hell out of it, so that you can cover all the diverse use cases efficiently. Even so, the need for performance must always be balanced against the need for readability, maintainability, elegance, extensibility, and so on.
May 27, 2020 | techrights.org
...it was developed along lines that are not entirely different from Microsoft's EEE tactics -- which today I will offer a new acronym and description for:
1. Steal
2. Add Bloat
3. Original Trashed
It's difficult conceptually to "steal" Free software, because it (sort of, effectively) belongs to everyone. It's not always Public Domain -- copyleft is meant to prevent that. The only way you can "steal" free software is by taking it from everyone and restricting it again. That's like "stealing" the ocean or the sky, and putting it somewhere that people can't get to it. But this is what non-free software does. (You could also simply go against the license terms, but I doubt Stallman would go for the word "stealing" or "theft" as a first choice to describe non-compliance).
... ... ...
Again and again, Microsoft "Steals" or "Steers" the development process itself so it can gain control (pronounced: "ownership") of the software. It is a gradual process, where Microsoft has more and more influence until they dominate the project and with it, the user. This is similar to the process where cults (or drug addiction) take over people's lives, and similar to the process where narcissists interfere in the lives of others -- by staking a claim and gradually dominating the person or project.
Then they Add Bloat -- more features. GitHub is friendly to use, you don't have to care about how Git works to use it (this is true of many GitHub clones as well, as even I do not really care how Git works very much. It took a long time for someone to even drag me towards GitHub for code hosting, until they were acquired and I stopped using it) and due to its GLOBAL size, nobody can or ought to reproduce its network effects.
I understand the draw of network effects. That's why larger federated instances of code hosts are going to be more popular than smaller instances. We really need a mix -- smaller instances to be easy to host and autonomous, larger instances to draw people away from even more gigantic code silos. We can't get away from network effects (just like the War on Drugs will never work) but we can make them easier and less troublesome (or safer) to deal with.
Finally, the Original is trashed, and the SABOTage is complete. This has happened with Python against Python 2, despite protests from seasoned and professional developers, it was deliberately attempted with Systemd against not just sysvinit but ALL alternatives -- Free software acts like proprietary software when it treats the existence of alternatives as a problem to be solved. I personally never trust a project with developers as arrogant as that.
... ... ...
There's a meme about creepy vans with "FREE CANDY" painted on the side, which I took one of the photos from and edited it so that it said "FEATURES" instead. This is more or less how I feel about new features in general, given my experience with their abuse in development, marketing and the takeover of formerly good software projects.
People then accuse me of being against features, of course. As with the Dijkstra article, the real problem isn't Basic itself. The problem isn't features per se (though they do play a very key role in this problem) and I'm not really against features -- or candy, for that matter.
I'm against these things being used as bait, to entrap people in an unpleasant situation that makes escape difficult. You know, "lock-in". Don't get in the van -- don't even go NEAR the van.
Candy is nice, and some features are nice too. But we would all be better off if we could get the candy safely, and delete the creepy horrible van that comes with it. That's true whether the creepy van is GitHub, or surveillance by GIAFAM, or a Leviathan "init" system, or just breaking decades of perfectly good Python code, to try to force people to develop differently because Google or Microsoft (who both have had heavy influence over newer Python development) want to try to force you to -- all while using "free" software.
If all that makes free software "free" is the license -- (yes, it's the primary and key part, it's a necessary ingredient) then putting "free" software on GitHub shouldn't be a problem, right? Not if you're running LibreJS, at least.
In practice, "Free in license only" ignores the fact that if software is effectively free, the user is also effectively free. If free software development gets dragged into doing the bidding of non-free software companies and starts creating lock-in for the user, even if it's external or peripheral, then they simply found an effective way around the true goal of the license. They did it with Tivoisation, so we know that it's possible. They've done this in a number of ways, and they're doing it now.
If people are trying to make the user less free, and they're effectively making the user less free, maybe the license isn't an effective monolithic solution. The cost of freedom is eternal vigilance. They never said "The cost of freedom is slapping a free license on things", as far as I know. (Of course it helps). This really isn't a straw man, so much as a rebuttal to the extremely glib take on software freedom in general that permeates development communities these days.
But the benefits of Free software, free candy and new features are all meaningless, if the user isn't in control.
Don't get in the van.
"The freedom to NOT run the software, to be free to avoid vendor lock-in through appropriate modularization/encapsulation and minimized dependencies; meaning any free software can be replaced with a user's preferred alternatives (freedom 4)." – Peter Boughton
... ... ...
Dec 18, 2017 | esr.ibiblio.org
Posted on 2017-12-18 by esr
In recent discussion on this blog of the GCC repository transition and reposurgeon, I observed "If I'd been restricted to C, forget it – reposurgeon wouldn't have happened at all".
I should be more specific about this, since I think the underlying problem is general to a great deal more that the implementation of reposurgeon. It ties back to a lot of recent discussion here of C, Python, Go, and the transition to a post-C world that I think I see happening in systems programming.
(This post perhaps best viewed as a continuation of my three-part series: The long goodbye to C , The big break in computer languages , and Language engineering for great justice .)
I shall start by urging that you must take me seriously when I speak of C's limitations. I've been programming in C for 35 years. Some of my oldest C code is still in wide production use. Speaking from that experience, I say there are some things only a damn fool tries to do in C, or in any other language without automatic memory management (AMM, for the rest of this article).
This is another angle on Greenspun's Law: "Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp." Anyone who's been in the trenches long enough gets that Greenspun's real point is not about C or Fortran or Common Lisp. His maxim could be generalized in a Henry-Spencer-does-Santyana style as this:
"At any sufficient scale, those who do not have automatic memory management in their language are condemned to reinvent it, poorly."
In other words, there's a complexity threshold above which lack of AMM becomes intolerable. Lack of it either makes expressive programming in your application domain impossible or sends your defect rate skyrocketing, or both. Usually both.
When you hit that point in a language like C (or C++), your way out is usually to write an ad-hoc layer or a bunch of semi-disconnected little facilities that implement parts of an AMM layer, poorly. Hello, Greenspun's Law!
It's not particularly the line count of your source code driving this, but rather the complexity of the data structures it uses internally; I'll call this its "greenspunity". Large programs that process data in simple, linear, straight-through ways may evade needing an ad-hoc AMM layer. Smaller ones with gnarlier data management (higher greenspunity) won't. Anything that has to do – for example – graph theory is doomed to need one (why, hello, there, reposurgeon!)
There's a trap waiting here. As the greenspunity rises, you are likely to find that more and more of your effort and defect chasing is related to the AMM layer, and proportionally less goes to the application logic. Redoubling your effort, you increasingly miss your aim.
Even when you're merely at the edge of this trap, your defect rates will be dominated by issues like double-free errors and malloc leaks. This is commonly the case in C/C++ programs of even low greenspunity.
Sometimes you really have no alternative but to be stuck with an ad-hoc AMM layer. Usually you get pinned to this situation because real AMM would impose latency costs you can't afford. The major case of this is operating-system kernels. I could say a lot more about the costs and contortions this forces you to assume, and perhaps I will in a future post, but it's out of scope for this one.
On the other hand, reposurgeon is representative of a very large class of "systems" programs that don't have these tight latency constraints. Before I get to back to the implications of not being latency constrained, one last thing – the most important thing – about escalating AMM-layer complexity.
At high enough levels of greenspunity, the effort required to build and maintain your ad-hoc AMM layer becomes a black hole. You can't actually make any progress on the application domain at all – when you try it's like being nibbled to death by ducks.
Now consider this prospectively, from the point of view of someone like me who has architect skill. A lot of that skill is being pretty good at visualizing the data flows and structures – and thus estimating the greenspunity – implied by a problem domain. Before you've written any code, that is.
If you see the world that way, possible projects will be divided into "Yes, can be done in a language without AMM." versus "Nope. Nope. Nope. Not a damn fool, it's a black hole, ain't nohow going there without AMM."
This is why I said that if I were restricted to C, reposurgeon would never have happened at all. I wasn't being hyperbolic – that evaluation comes from a cool and exact sense of how far reposurgeon's problem domain floats above the greenspunity level where an ad-hoc AMM layer becomes a black hole. I shudder just thinking about it.
Of course, where that black-hole level of ad-hoc AMM complexity is varies by programmer. But, though software is sometimes written by people who are exceptionally good at managing that kind of hair, it then generally has to be maintained by people who are less so.
The really smart people in my audience have already figured out that this is why Ken Thompson, the co-designer of C, put AMM in Go, in spite of the latency issues.
Ken understands something large and simple. Software expands, not just in line count but in greenspunity, to meet hardware capacity and user demand. In languages like C and C++ we are approaching a point of singularity at which typical – not just worst-case – greenspunity is so high that the ad-hoc AMM becomes a black hole, or at best a trap nigh-indistinguishable from one.
Thus, Go. It didn't have to be Go; I'm not actually being a partisan for that language here. It could have been (say) Ocaml, or any of half a dozen other languages I can think of. The point is the combination of AMM with compiled-code speed is ceasing to be a luxury option; increasingly it will be baseline for getting most kinds of systems work done at all.
Sociologically, this implies an interesting split. Historically the boundary between systems work under hard latency constraints and systems work without it has been blurry and permeable. People on both sides of it coded in C and skillsets were similar. People like me who mostly do out-of-kernel systems work but have code in several different kernels were, if not common, at least not odd outliers.
Increasingly, I think, this will cease being true. Out-of-kernel work will move to Go, or languages in its class. C – or non-AMM languages intended as C successors, like Rust – will keep kernels and real-time firmware, at least for the foreseeable future. Skillsets will diverge.
It'll be a more fragmented systems-programming world. Oh well; one does what one must, and the tide of rising software complexity is not about to be turned.
144 thoughts on "C, Python, Go, and the Generalized Greenspun Law"
- David Collier-Brown on 2017-12-18 at 17:38:05 said: Andrew Forber quasily-accidentally created a similar truth: any sufficiently complex program using overlays will eventually contain an implementation of virtual memory. Reply ↓
- esr on 2017-12-18 at 17:40:45 said: >Andrew Forber quasily-accidentally created a similar truth: any sufficiently complex program using overlays will eventually contain an implementation of virtual memory.
Oh, neat. I think that's a closer approximation to the most general statement than Greenspun's, actually. Reply ↓
- Alex K. on 2017-12-20 at 09:50:37 said: For today, maybe -- but the first time I had Greenspun's Tenth quoted at me was in the late '90s. [I know this was around/just before the first C++ standard, maybe contrasting it to this new upstart Java thing?] This was definitely during the era where big computers still did your serious work, and pretty much all of it was in either C, COBOL, or FORTRAN. [Yeah, yeah, I know– COBOL is all caps for being an acronym, while Fortran ain't–but since I'm talking about an earlier epoch of computing, I'm going to use the conventions of that era.]
Now the Object-Oriented paradigm has really mitigated this to an enormous degree, but I seem to recall at that time the argument was that multimethod dispatch (a benefit so great you happily accept the flaw of memory management) was the Killer Feature of LISP.
Given the way the other advantage I would have given Lisp over the past two decades–anonymous functions [lambdas] and treating them as first-class values–are creeping into a more mainstream usage, I think automated memory management is the last visible "Lispy" feature people will associate with Greenspun. [What, are you now visualizing lisp macros? Perish the thought–anytime I see a foot cannon that big, I stop calling it a feature ] Reply ↓
- Mycroft Jones on 2017-12-18 at 17:41:04 said: After looking at the Linear Lisp paper, I think that is where Lutz Mueller got One Reference Only memory management from. For automatic memory management, I'm a big fan of ORO. Not sure how to apply it to a statically typed language though. Wish it was available for Go. ORO is extremely predictable and repeatable, not stuttery. Reply ↓
- lliamander on 2017-12-18 at 19:28:04 said: > Not sure how to apply it to a statically typed language though.
Clean is probably what you would be looking for: https://en.wikipedia.org/wiki/Clean_(programming_language) Reply ↓
- Jeff Read on 2017-12-19 at 00:38:57 said: If Lutz was inspired by Linear Lisp, he didn't cite it. Actually ORO is more like region-based memory allocation with a single region: values which leave the current scope are copied which can be slow if you're passing large lists or vectors around.
Linear Lisp is something quite a bit different, and allows for arbitrary data structures with arbitrarily deep linking within, so long as there are no cycles in the data structures. You can even pass references into and out of functions if you like; what you can't do is alias them. As for statically typed programming languages well, there are linear type systems , which as lliamander mentioned are implemented in Clean.
Newlisp in general is smack in the middle between Rust and Urbit in terms of cultishness of its community, and that scares me right off it. That and it doesn't really bring anything to the table that couldn't be had by "old" lisps (and Lutz frequently doubles down on mistakes in the design that had been discovered and corrected decades ago by "old" Lisp implementers). Reply ↓
- Gary E. Miller on 2017-12-18 at 18:02:10 said: For a long time I've been holding out hope for a 'standard' garbage collector library for C. But not gonna hold my breath. One probable reason Ken Thompson had to invent Go is to go around the tremendous difficulty in getting new stuff into C. Reply ↓
- esr on 2017-12-18 at 18:40:53 said: >For a long time I've been holding out hope for a 'standard' garbage collector library for C. But not gonna hold my breath.
Yeah, good idea not to. People as smart/skilled as you and me have been poking at this problem since the 1980s and it's pretty easy to show that you can't do better than Boehm–Demers–Weiser, which has limitations that make it impractical. Sigh Reply ↓
- John Cowan on 2018-04-15 at 00:11:56 said: What's impractical about it? I replaced the native GC in the standard implementation of the Joy interpreter with BDW, and it worked very well. Reply ↓
- esr on 2018-04-15 at 08:30:12 said: >What's impractical about it? I replaced the native GC in the standard implementation of the Joy interpreter with BDW, and it worked very well.
GCing data on the stack is a crapshoot. Pointers can get mistaken for data and vice-versa. Reply ↓
- Konstantin Khomoutov on 2017-12-20 at 06:30:05 said: I think it's not about C. Let me cite a little bit from "The Go Programming Language" (A. Donovan, B. Kernighan) --
in the section about Go influences, it states: "Rob Pike and others began to experiment with CSP implementations as actual languages. The first was called Squeak, which provided a language with statically created channels. This was followed by Newsqueak, which offered C-like statement and expression syntax and Pascal-like type notation. It was a purely functional language with garbage collection, again aimed at managing keyboard, mouse, and window events. Channels became first-class values, dynamically created and storable in variables.
The Plan 9 operating system carried these ideas forward in a language called Alef. Alef tried to make Newsqueak a viable system programming language, but its omission of garbage collection made concurrency too painful."
So my takeaway was that AMM was key to get proper concurrency.
Before Go, I dabbled with Erlang (which I enjoy, too), and I'd say there the AMM is also a key to having concurrency made easy. (Update: the ellipses I put into the citation were eaten by the engine and won't appear when I tried to re-edit my comment; sorry.) Reply ↓
- tz on 2017-12-18 at 18:29:20 said: I think this is the key insight.
There are programs with zero MM.
There are programs with orderly MM, e.g. unzip does mallocs and frees in a stacklike formation: malloc a, b, c; free c, b, a (as of 1.1.4). This is laminar, not chaotic flow.
Then there is the complex, nonlinear, turbulent flow, chaos. You can't do that in basic C, you need AMM. But it is easier in a language that includes it (and does it well).
Virtual Memory is related to AMM – too often the memory leaks were hidden (think of your O(n**2) for small values of n) – small leaks that weren't visible under ordinary circumstances.
Still, you aren't going to get AMM on the current Arduino variants. At least not easily.
That is where the line is, how much resources. Because you require a medium to large OS, or the equivalent resources to do AMM.
Yet this is similar to using FPGAs, or GPUs for blockchain coin mining instead of the CPU. Sometimes you have to go big. Your Cooper Mini might be great most of the time, but sometimes you need a Diesel big pickup. I think a Mini would fit in the bed of my F250.
As tasks get bigger they need bigger machines. Reply ↓
- Zygo on 2017-12-18 at 18:31:34 said: > Of course, where that black-hole level of ad-hoc AMM complexity is varies by programmer.
I was about to say something about writing an AMM layer before breakfast on the way to writing backtracking parallel graph-searchers at lunchtime, but I guess you covered that. Reply ↓
- esr on 2017-12-18 at 18:34:59 said: >I was about to say something about writing an AMM layer before breakfast on the way to writing backtracking parallel graph-searchers at lunchtime, but I guess you covered that.
Well, yeah. I have days like that occasionally, but it would be unwise to plan a project based on the assumption that I will. And deeply foolish to assume that J. Random Programmer will. Reply ↓
- tz on 2017-12-18 at 18:32:37 said: C displaced assembler because it had the speed and flexibility while being portable.
Go, or something like it will displace C where they can get just the right features into the standard library including AMM/GC.
Maybe we need Garbage Collecting C. GCC?
One problem is you can't do the pointer aliasing if you have a GC (unless you also do some auxiliary bits which would be hard to maintain). void *x = y; might be decodable but there are deeper and more complex things a compiler can't detect. If the compiler gets it wrong, you get a memory leak, or have to constrain the language to prevent things which manipulate pointers when that is required or clearer. Reply ↓
- Zygo on 2017-12-18 at 20:52:40 said: C++11 shared_ptr does handle the aliasing case. Each pointer object has two fields, one for the thing being pointed to, and one for the thing's containing object (or its associated GC metadata). A pointer alias assignment alters the former during the assignment and copies the latter verbatim. The syntax is (as far as a C programmer knows, after a few typedefs) identical to C.
The trouble with applying that idea to C is that the standard pointers don't have space or time for the second field, and heap management isn't standardized at all (free() is provided, but programs are not required to use it or any other function exclusively for this purpose). Change either of those two things and the resulting language becomes very different from C. Reply ↓
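A minimal sketch of the aliasing behaviour described above, using std::shared_ptr's aliasing constructor; the Graph and Node types are invented for illustration:

    #include <cstdio>
    #include <memory>
    #include <vector>

    struct Node { int value; };
    struct Graph { std::vector<Node> nodes; };

    int main() {
        auto g = std::make_shared<Graph>();
        g->nodes = {{1}, {2}, {3}};

        // Aliasing constructor: shares ownership of the whole Graph (the
        // "containing object") while pointing at one Node inside it.
        std::shared_ptr<Node> first(g, &g->nodes[0]);

        g.reset();                         // the Graph stays alive: 'first' still owns it
        std::printf("%d\n", first->value); // prints 1
        return 0;
    }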
- IGnatius T Foobar on 2017-12-18 at 18:39:28 said: Eric, I love you, you're a pepper, but you have a bad habit of painting a portrait of J. Random Hacker that is actually a portrait of Eric S. Raymond. The world is getting along with C just fine. 95% of the use cases you describe for needing garbage collection are eliminated with the simple addition of a string class which nearly everyone has in their toolkit. Reply ↓
- esr on 2017-12-18 at 18:55:46 said: >The world is getting along with C just fine. 95% of the use cases you describe for needing garbage collection are eliminated with the simple addition of a string class which nearly everyone has in their toolkit.
Even if you're right, the escalation of complexity means that what I'm facing now, J. Random Hacker will face in a couple of years. Yes, not everybody writes reposurgeon but a string class won't suffice for much longer even if it does today. Reply ↓
- tz on 2017-12-18 at 19:27:12 said: Here's another key.
I once had a sign:
I don't solve complex problems.
I simplify complex problems and solve them.
Complexity does escalate, or at least in the sense that we could cross oceans a few centuries ago, and can go to the planets and beyond today.
We shouldn't use a rocket ship to get groceries from the local market.
J Random H-1B will face some easily decomposed apparently complex problem and write a pile of spaghetti.
The true nature of a hacker is not so much in being able to handle the most deep and complex situations, but in being able to recognize which situations are truly complex and in preference working hard to simplify and reduce complexity in preference to writing something to handle the complexity. Dealing with a slain dragon's corpse is easier than one that is live, annoyed, and immolating anything within a few hundred yards. Some are capable of handling the latter. The wise knight prefers to reduce the problem to the former. Reply ↓
- William O. B'Livion on 2017-12-20 at 02:02:40 said: > J Random H-1B will face some easily decomposed
> apparently complex problem and write a pile of spaghetti.
J Random H-1B will do it with Informatica and Java. Reply ↓
- tz on 2017-12-18 at 18:42:33 said: I will add one last "perils of java school" comment.
One of the epic fails of C++ is it being sold as C but where anyone could program because of all the safeties. Instead it created bloatware and the very memory leaks because the lesser programmers didn't KNOW (grok, understand) what they were doing. It was all "automatic".
This is the opportunity and danger of AMM/GC. It is a tool, and one with hot areas and sharp edges. Wendy (formerly Walter) Carlos had a law that said "Whatever parameter you can control, you must control". Having a really good AMM/GC requires you to respect what it can and cannot do. OK, form a huge – into VM – linked list. Won't it just handle everything? NO! You have to think reference counts, at least in the back of your mind. It simplifies the problem but doesn't eliminate it. It turns the black hole into a pulsar, but you still can be hit.
Many will gloss over and either superficially learn (but can't apply) or ignore the "how to use automatic memory management" in their CS course. Like they didn't bother with pointers, recursion, or multithreading subtleties. Reply ↓
- lliamander on 2017-12-18 at 19:36:35 said: I would say that there is a parallel between concurrency models and memory management approaches. Beyond a certain level of complexity, it's simply infeasible for J. Random Hacker to implement a locks-based solution just as it is infeasible for Mr. Hacker to write a solution with manual memory management.
My worry is that by allowing the unsafe sharing of mutable state between goroutines, Go will never be able to achieve the per-process (i.e. language-level process, not OS-level) GC that would allow for the really low latencies necessary for an AMM language to move closer into the kernel space. But certainly insofar as many "systems" level applications don't require extremely low latencies, Go will probably be a viable solution going forward. Reply ↓
- Jeff Read on 2017-12-18 at 20:14:18 said: Putting aside the hard deadlines found in real-time systems programming, it has been empirically determined that a GC'd program requires five times as much memory as the equivalent program with explicit memory management. Applications which are both CPU- and RAM-intensive, where you need to have your performance cake and eat it in as little memory as possible, are thus severely constrained in terms of viable languages they could be implemented in. And by "severely constrained" I mean you get your choice of C++ or Rust. (C, Pascal, and Ada are on the table, but none offer quite the same metaprogramming flexibility as those two.)
I think your problems with reposurgeon stem from the fact that you're just running up against the hard upper bound on the vector sum of CPU and RAM efficiency that a dynamic language like Python (even sped up with PyPy) can feasibly deliver on a hardware configuration you can order from Amazon. For applications like that, you need to forgo GC entirely and rely on smart pointers, automatic reference counting, value semantics, and RAII. Reply ↓
- esr on 2017-12-18 at 20:27:20 said: > For applications like that, you need to forgo GC entirely and rely on smart pointers, automatic reference counting, value semantics, and RAII.
How many times do I have to repeat "reposurgeon would never have been written under that constraint" before somebody who claims LISP experience gets it? Reply ↓
- Jeff Read on 2017-12-18 at 20:48:24 said: You mentioned that reposurgeon wouldn't have been written under the constraints of C. But C++ is not C, and has an entirely different set of constraints. In practice, it's not that far off from Lisp, especially if you avail yourself of those wonderful features in C++1x. C++ programmers talk about "zero-cost abstractions" for a reason.
Semantically, programming in a GC'd language and programming in a language that uses smart pointers and RAII are very similar: you create the objects you need, and they are automatically disposed of when no longer needed. But instead of delegating to a GC which cleans them up whenever, both you and the compiler have compile-time knowledge of when those cleanups will take place, allowing you finer-grained control over how memory -- or any other resource -- is used.
Oh, that's another thing: GC only has something to say about memory -- not file handles, sockets, or any other resource. In C++, with appropriate types, value semantics can be made to apply to those too, and they will immediately be destructed after their last use. There is no special "with" construct in C++; you simply construct the objects you need and they're destructed when they go out of scope.
This is how the big boys do systems programming. Again, Go has barely displaced C++ at all inside Google despite being intended for just that purpose. Their entire critical path in search is still C++ code. And it always will be until Rust gains traction.
As for my Lisp experience, I know enough to know that Lisp has utterly failed and this is one of the major reasons why. It's not even a decent AI language, because the scruffies won, AI is basically large-scale statistics, and most practitioners these days use C++. Reply ↓
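As an editorial aside, the RAII point above can be shown with nothing but the standard library; a minimal sketch:

    #include <fstream>
    #include <iostream>
    #include <string>

    // Reads the first line of a file. The std::ifstream destructor closes the
    // handle on every exit path -- normal return or exception -- so there is
    // no separate 'with' or 'finally' construct.
    std::string first_line(const std::string& path) {
        std::ifstream in(path);      // resource acquired here
        std::string line;
        std::getline(in, line);
        return line;
    }                                // resource released here

    int main() {
        std::cout << first_line("/etc/hostname") << "\n";
        return 0;
    }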
- esr on 2017-12-18 at 20:54:08 said: >C++ is not C, and has an entirely different set of constraints. In practice, it's not thst far off from Lisp,
Oh, bullshit. I think you're just trolling, now.
I've been a C++ programmer and know better than this.
But don't argue with me. Argue with Ken Thompson, who designed Go because he knows better than this. Reply ↓
- Anthony Williams on 2017-12-19 at 06:02:03 said: Modern C++ is a long way from C++ when it was first standardized in 1998. You should *never* be manually managing memory in modern C++. You want a dynamically sized array? Use std::vector. You want an adhoc graph? Use std::shared_ptr and std::weak_ptr.
Any code I see which uses new or delete, malloc or free will fail code review.
Destructors and the RAII idiom mean that this covers *any* resource, not just memory.
See the C++ Core Guidelines on resource and memory management: http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-resource Reply ↓
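A minimal sketch of the shared_ptr/weak_ptr approach to an ad hoc graph mentioned above; the node layout is invented for illustration (owning edges are shared, back edges are weak so a parent/child cycle does not leak):

    #include <memory>
    #include <string>
    #include <vector>

    struct Node {
        std::string name;
        std::vector<std::shared_ptr<Node>> children; // owning edges
        std::weak_ptr<Node> parent;                  // non-owning back edge, breaks the cycle
    };

    int main() {
        auto root = std::make_shared<Node>();
        root->name = "root";

        auto child = std::make_shared<Node>();
        child->name = "child";
        child->parent = root;            // weak: does not keep root alive
        root->children.push_back(child); // shared: keeps child alive

        // When 'root' goes out of scope the whole structure is freed; no
        // manual delete, and no leak from the parent/child cycle.
        return 0;
    }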
- esr on 2017-12-19 at 07:53:58 said: >Modern C++ is a long way from C++ when it was first standardized in 1998.
That's correct. Modern C++ is a disaster area of compounded complexity and fragile kludges piled on in a failed attempt to fix leaky abstractions. 1998 C++ had the leaky-abstractions problem, but at least it was drastically simpler. Clue: complexification when you don't even fix the problems is bad .
My experience dates from 2009 and included Boost – I was a senior dev on Battle For Wesnoth. Don't try to tell me I don't know what "modern C++" is like. Reply ↓
- Anthony Williams on 2017-12-19 at 08:17:58 said: > My experience dates from 2009 and included Boost – I was a senior dev on Battle For Wesnoth. Don't try to tell me I don't know what "modern C++" is like.
C++ in 2009 with boost was C++ from 1998 with a few extra libraries. I mean that quite literally -- the standard was unchanged apart from minor fixes in 2003.
C++ has changed a lot since then. There have been 3 standards issued, in 2011, 2014, and just now in 2017. Between them, there is a huge list of changes to the language and the standard library, and these are readily available -- both clang and gcc have kept up-to-date with the changes, and even MSVC isn't far behind. Even more changes are coming with C++20.
So, with all due respect, C++ from 2009 is not "modern C++", though there certainly were parts of boost that were leaning that way.
If you are interested, browse the wikipedia entries: https://en.wikipedia.org/wiki/C%2B%2B11 https://en.wikipedia.org/wiki/C%2B%2B14 and https://en.wikipedia.org/wiki/C%2B%2B17 along with articles like https://blog.smartbear.com/development/the-biggest-changes-in-c11-and-why-you-should-care/ http://www.drdobbs.com/cpp/the-c14-standard-what-you-need-to-know/240169034 and https://isocpp.org/files/papers/p0636r0.html Reply ↓
- esr on 2017-12-19 at 08:37:11 said: >So, with all due respect, C++ from 2009 is not "modern C++", though there certainly were parts of boost that were leaning that way.
But the foundational abstractions are still leaky. So when you tell me "it's all better now", I don't believe you. I just plain do not.
I've been hearing this soothing song ever since around 1989. "Trust us, it's all fixed." Then I look at the "fixes" and they're horrifying monstrosities like templates – all the dangers of preprocessor macros and a whole new class of Turing-complete nightmares, too! In thirty years I'm certain I'll be hearing that C++2047 solves all the problems this time for sure , and I won't believe a word of it then, either. Reply ↓
- Anthony Williams on 2017-12-19 at 08:45:34 said: > But the foundational abstractions are still leaky.
If you would elaborate on this, I would be grateful. What are the problematic leaky abstractions you are concerned about? Reply ↓
- esr on 2017-12-19 at 09:26:24 said: >If you would elaborate on this, I would be grateful. What are the problematic leaky abstractions you are concerned about?
Are array accesses bounds-checked? Don't yammer about iterators; what happens if I say foo[3] and foo is dimension 2? Never mind, I know the answer.
Are bare, untyped pointers still in the language? Never mind, I know the answer.
Can I get a core dump from code that the compiler has statically checked and contains no casts? Never mind, I know the answer.
Yes, C has these problems too. But it doesn't pretend not to, and in C I'm never afflicted by masochistic cultists denying that they're problems.
- Anthony Williams on 2017-12-19 at 09:54:51 said: Thank you for the list of concerns.
> Are array accesses bounds-checked? Don't yammer about iterators; what happens if I say foo[3] and foo is dimension 2? Never mind, I know the answer.
You are right, bare arrays are not bounds-checked, but std::array provides an at() member function, so arr.at(3) will throw if the array is too small.
Also, ranged-for loops can avoid the need for explicit indexing lots of the time anyway.
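A minimal sketch of the bounds-checked access just mentioned (std::array::at throws std::out_of_range where operator[] does not check):

    #include <array>
    #include <cstdio>
    #include <stdexcept>

    int main() {
        std::array<int, 2> foo{10, 20};

        std::printf("%d\n", foo[1]);         // unchecked, like a C array
        try {
            std::printf("%d\n", foo.at(3));  // checked: index 3 in a 2-element array
        } catch (const std::out_of_range& e) {
            std::printf("caught: %s\n", e.what());
        }
        return 0;
    }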
> Are bare, untyped pointers still in the language? Never mind, I know the answer.
Yes, void* is still in the language. You need to cast it to use it, which is something that is easy to spot in a code review.
> Can I get a core dump from code that the compiler has statically checked and contains no casts? Never mind, I know the answer.
Probably. Is it possible to write code in any language that dies horribly in an unintended fashion?
> Yes, C has these problems too. But it doesn't pretend not to, and in C I'm never afflicted by masochistic cultists denying that they're problems.
Did I say C++ was perfect? This blog post was about the problems inherent in the lack of automatic memory management in C and C++, and thus why you wouldn't have written reposurgeon if that's all you had. My point is that it is easy to write C++ in a way that doesn't suffer from those problems.
- esr on 2017-12-19 at 10:10:11 said: > My point is that it is easy to write C++ in a way that doesn't suffer from those problems.
No, it is not. The error statistics of large C++ programs refute you.
My personal experience on Battle for Wesnoth refutes you.
The persistent self-deception I hear from C++ advocates on this score does nothing to endear the language to me.
- Ian Bruene on 2017-12-19 at 11:05:22 said: So what I am hearing in this is: "Use these new standards built on top of the language, and make sure every single one of your dependencies holds to them just as religiously as you are. And if anyone fails at any point in the chain you are doomed."
Cool.
- Casey Barker on 2017-12-19 at 11:12:16 said: Using Go has been a revelation, so I mostly agree with Eric here. My only objection is to equating C++03/Boost with "modern" C++. I used both heavily, and given a green field, I would consider C++14 for some of these thorny designs that I'd never have used C++03/Boost for. It's a qualitatively different experience. Just browse a copy of Scott Meyers' _Effective Modern C++_ for a few minutes, and I think you'll at least understand why C++14 users object to the comparison. Modern C++ enables better designs.
Alas, C++ is a multi-layered tool chest. If you stick to the top two shelves, you can build large-scale, complex designs with pretty good safety and nigh unmatched performance. Everything below the third shelf has rusty tools with exposed wires and no blade guards, and on large-scale projects, it's impossible to keep J. Random Programmer from reaching for those tools.
So no, if they keep adding features, C++ 2047 won't materially improve this situation. But there is a contingent (including Meyers) pushing for the *removal* of features. I think that's the only way C++ will stay relevant in the long-term.
http://scottmeyers.blogspot.com/2015/11/breaking-all-eggs-in-c.html Reply ↓
- Zygo on 2017-12-19 at 11:52:17 said: My personal experience is that C++11 code (in particular, code that uses closures, deleted methods, auto (a feature you yourself recommended for C with different syntax), and the automatic memory and resource management classes) has fewer defects per developer-year than the equivalent C++03-and-earlier code.
This is especially so if you turn on compiler flags that disable the legacy features (e.g. -Werror=old-style-cast), and treat any legacy C or C++03 code like foreign language code that needs to be buried under a FFI to make it safe to use.
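As a small illustration of what that kind of flag buys (the file name is made up; both g++ and clang++ accept -Werror=old-style-cast):

// Compile with: g++ -std=c++11 -Werror=old-style-cast legacy.cpp
int main() {
    double ratio = 3.9;
    int a = (int)ratio;               // C-style cast: rejected as a hard error
    int b = static_cast<int>(ratio);  // explicit, greppable, compiles cleanly
    return a + b;
}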
Qualitatively, the defects that do occur are easier to debug in C++11 vs C++03. There are fewer opportunities for the compiler to interpolate in surprising ways because the automatic rules are tighter, the library has better utility classes that make overloads and premature optimization less necessary, the core language has features that make templates less necessary, and it's now possible to explicitly select or rule out invalid candidates for automatic code generation.
I can design in Lisp, but write C++11 without much effort of mental translation. Contrast with C++03, where people usually just write all the Lispy bits in some completely separate language (or create shambling horrors like Boost to try to bandaid over the missing limbs – boost::lambda, anyone?
Oh, look, since C++11 they've doubled down on something called boost::phoenix). Does C++11 solve all the problems? Absolutely not, that would break compatibility. But C++11 is noticeably better than its predecessors. I would say the defect rates are now comparable to Perl with a bunch of custom C modules (i.e. exact defect rate depends on how much you wrote in each language). Reply ↓
- NHO on 2017-12-19 at 11:55:11 said: C++ happily turned into a complexity meta-tarpit with "Everything that could be implemented in the STL with templates should be, instead of in the core language". And not deprecating/removing features, instead leaving them there. Reply ↓
- Michael on 2017-12-19 at 08:59:41 said: For the curious, can you point to a C++ tutorial/intro that shows how to do it the right way? Reply ↓
- Anthony Williams on 2017-12-19 at 09:58:12 said: I would suggest checking out the C++ Core Guidelines http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines Reply ↓
- Michael on 2017-12-19 at 12:09:45 said: Thank you. Not sure this is what I was looking for.
Was thinking more along the lines of "Learning Python" equivalent.
- Anthony Williams on 2017-12-19 at 08:26:13 said: > That's correct. Modern C++ is a disaster area of compounded complexity and fragile kludges piled on in a failed attempt to fix leaky abstractions. 1998 C++ had the leaky-abstractions problem, but at least it was drastically simpler. Clue: complexification when you don't even fix the problems is bad.
I agree that there is a lot of complexity in C++. That doesn't mean you have to use all of it. Yes, it makes maintaining legacy code harder, because the older code might use dangerous or complex parts, but for new code we can avoid the danger, and just stick to the simple, safe parts.
The complexity isn't all bad, though. Part of the complexity arises by providing the ability to express more complex things in the language. This can then be used to provide something simple to the user.
Take std::variant as an example. This is a new facility from C++17 that provides a type-safe discriminated variant. If you have a variant that could hold an int or a string and you store an int in it, then attempting to access it as a string will cause an exception rather than a silent error. The code that *implements* std::variant is complex. The code that uses it is simple. Reply ↓
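A minimal sketch of that usage side, assuming C++17 (toy values, not from any real program):

#include <iostream>
#include <string>
#include <variant>

int main() {
    std::variant<int, std::string> v = 42;               // currently holds an int
    std::cout << std::get<int>(v) << '\n';               // fine
    try {
        std::cout << std::get<std::string>(v) << '\n';   // wrong alternative
    } catch (const std::bad_variant_access&) {
        std::cout << "not holding a string right now\n"; // exception, not a silent misread
    }
}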
- Jeff Read on 2017-12-20 at 09:07:06 said: I won't argue with you. C++ is error-prone (albeit less so than C) and horrid to work in. But for certain classes of algorithmically complex, CPU- and RAM-intensive problems it is literally the only viable choice. And it looks like performing surgery on GCC-scale repos falls into that class of problem.
I'm not even saying it was a bad idea to initially write reposurgeon in Python. Python and even Ruby are great languages to write prototypes or even small-scale production versions of things because of how rapidly they may be changed while you're hammering out the details. But scale comes around to bite you in the ass sooner than most people think and when it does, your choice of language hobbles you in a way that can't be compensated for by throwing more silicon at the problem. And it's in that niche where C++ and Rust dominate, absolutely uncontested. Reply ↓
- jim on 2017-12-22 at 06:41:27 said: If you found rust hard going, you are not a C++ programmer who knows better than this.
You were writing C in C++ Reply ↓
- Anthony Williams on 2017-12-19 at 06:15:12 said: > How many times do I have to repeat "reposurgeon would never have been written under that constraint" before somebody who claims LISP experience gets it?
That speaks to your lack of experience with modern C++, rather than an inherent limitation. *You* might not have written reposurgeon under that constraint, because *you* don't feel comfortable that you wouldn't have ended up with a black-hole of AMM. That does not mean that others wouldn't have or couldn't have, or that their code would necessarily be an unmaintainable black hole.
In well-written modern C++, memory management errors are a solved problem. You can just write code, and know that the compiler and library will take care of cleaning up for you, just like with a GC-based system, but with the added benefit that it's deterministic, and can handle non-memory resources such as file handles and sockets too. Reply ↓
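A minimal sketch of that claim (Node, process and trace.txt are made-up names): each object below releases its resource when it goes out of scope, with no explicit cleanup code.

#include <fstream>
#include <memory>
#include <vector>

struct Node { int value = 0; };

void process() {
    std::vector<int> numbers{1, 2, 3};      // heap-allocated buffer
    auto node = std::make_unique<Node>();   // heap-allocated object (C++14)
    std::ofstream log("trace.txt");         // a non-memory resource: a file handle
    log << numbers.size() << ' ' << node->value << '\n';
}   // all three are released here, deterministically, even if an exception is thrown

int main() { process(); }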
- esr on 2017-12-19 at 07:59:30 said: >In well-written modern C++, memory management errors are a solved problem
In well-written assembler memory management errors are a solved problem. I hate this idiotic cant repetition about how if you're just good enough for the language it won't hurt you – it sweeps the actual problem under the rug while pretending to virtue. Reply ↓
- Anthony Williams on 2017-12-19 at 08:08:53 said: > I hate this idiotic repetition about how if you're just good enough for the language it won't hurt you – it sweeps the actual problem under the rug while pretending to virtue.
It's not about being "just good enough". It's about *not* using the dangerous parts. If you never use manual memory management, then you can't forget to free, for example, and automatic memory management is *easy* to use. std::string is a darn sight easier to use than the C string functions, for example, and std::vector is a darn sight easier to use than dynamic arrays with new. In both cases, the runtime manages the memory, and it is *easier* to use than the dangerous version.
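To make the contrast concrete, a small hypothetical sketch (join_c, join_cpp and squares are invented names, not anyone's real code; error handling omitted):

#include <cstdlib>
#include <cstring>
#include <string>
#include <vector>

// C style: the caller must size the buffer correctly and remember to free it.
char* join_c(const char* a, const char* b) {
    char* out = static_cast<char*>(std::malloc(std::strlen(a) + std::strlen(b) + 1));
    std::strcpy(out, a);
    std::strcat(out, b);
    return out;                 // every caller now carries a free() obligation
}

// Modern C++: the runtime owns the memory; no size arithmetic, nothing to forget.
std::string join_cpp(const std::string& a, const std::string& b) {
    return a + b;
}

std::vector<int> squares(int n) {
    std::vector<int> v;
    for (int i = 0; i < n; ++i) v.push_back(i * i);
    return v;                   // grows as needed, freed automatically when the caller is done
}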
Every language has "dangerous" features that allow you to cause problems. Well-written programs in a given language don't use the dangerous features when there are equivalent ones without the problems. The same is true with C++.
The fact that historically there are areas where C++ didn't provide a good solution, and thus there are programs that don't use the modern solution, and experience the consequential problems is not an inherent problem with the language, but it does make it harder to educate people. Reply ↓
- John D. Bell on 2017-12-19 at 10:48:09 said: > It's about *not* using the dangerous parts. Every language has "dangerous" features that allow you to cause problems. Well-written programs in a given language don't use the dangerous features when there are equivalent ones without the problems.
Why not use a language that doesn't have "'dangerous' features"?
NOTES: [1] I am not saying that Go is necessarily that language – I am not even saying that any existing language is necessarily that language.
[2] /me is being someplace between naive and trolling here. Reply ↓
- esr on 2017-12-19 at 11:10:15 said: >Why not use a language that doesn't have "'dangerous' features"?
Historically, it was because hardware was weak and expensive – you couldn't afford the overhead imposed by those languages. Now it's because the culture of software engineering has bad habits formed in those days and reflexively flinches from using higher-overhead safe languages, though it should not. Reply ↓
- Paul R on 2017-12-19 at 12:30:42 said: Runtime efficiency still matters. That and the ability to innovate are the reasons I think C++ is in such wide use.
To be provocative, I think there are two types of programmer, the ones who watch Eric Niebler on Ranges https://www.youtube.com/watch?v=mFUXNMfaciE&t=4230s and think 'Wow, I want to find out more!' and the rest. The rest can have Go and Rust
D of course is the baby elephant in the room, worth much more attention than it gets. Reply ↓
- Michael on 2017-12-19 at 12:53:33 said: Runtime efficiency still matters. That and the ability to innovate are the reasons I think C++ is in such wide use.
Because you can't get runtime efficiency in any other language?
Because you can't innovate in any other language? Reply ↓
- Paul R on 2017-12-19 at 13:50:56 said: Obviously not.
Our three main advantages, runtime efficiency, innovation opportunity, building on a base of millions of lines of code that run the internet and an international standard.
Our four main advantages
More seriously, C++ enabled the STL, the STL transforms the approach of its users, with much increased reliability and readability, but no loss of performance. And at the same time your old code still runs. Now that is old stuff, and STL2 is on the way. Evolution.
That's five. Damn Reply ↓
- Zygo on 2017-12-19 at 14:14:42 said: > Because you can't innovate in any other language?
That claim sounded odd to me too. C++ looks like the place that well-proven features of younger languages go to die and become fossilized. The standardization process would seem to require it. Reply ↓
- Paul R on 2017-12-20 at 06:27:47 said: Such as?
My thought was the language is flexible enough to enable new stuff, and has sufficient weight behind it to get that new stuff actually used.
Generic programming being a prime example.
- Michael on 2017-12-20 at 08:19:41 said: My thought was the language is flexible enough to enable new stuff, and has sufficient weight behind it to get that new stuff actually used.
Are you sure it's that, or is it more the fact that the standards committee has forever had a me-too kitchen-sink no-feature-left-behind obsession?
(Makes me wonder if it doesn't share some DNA with the featuritis that has been Microsoft's calling card for so long. – they grew up together.)
- Paul R on 2017-12-20 at 11:13:20 said: No, because people come to the standards committee with ideas, and you cannot have too many libraries. You don't pay for what you don't use. Prime directive C++.
- Michael on 2017-12-20 at 11:35:06 said: and you cannot have too many libraries. You don't pay for what you don't use.
And this, I suspect, is the primary weakness in your perspective.
Is the defect rate of C++ code better or worse because of that?
- Paul R on 2017-12-20 at 15:49:29 said: The rate is obviously lower because I've written less code and library code only survives if it is sound. Are you suggesting that reusing code is a bad idea? Or that an indeterminate number of reimplementations of the same functionality is a good thing?
You're not on the most productive path to effective criticism of C++ here.
- Michael on 2017-12-20 at 17:40:45 said: The rate is obviously lower because I've written less code
Please reconsider that statement in light of how defect rates are measured.
Are you suggesting..
Arguing strawmen and words you put in someone's mouth is not the most productive path to effective defense of C++.
But thank you for the discussion.
- Paul R on 2017-12-20 at 18:46:53 said: This column is too narrow to have a decent discussion. WordPress should rewrite in C++ or I should dig out my Latin dictionary.
Seriously, extending the reach of libraries that become standardised is hard to criticise, extending the reach of the core language is.
It used to be a thing that C didn't have built in functionality for I/O (for example) rather it was supplied by libraries written in C interfacing to a lower level system interface. This principle seems to have been thrown out of the window for Go and the others. I'm not sure that's a long term win. YMMV.
But use what you like or what your cannot talk your employer out of using, or what you can get a job using. As long as it's not Rust.
- Zygo on 2017-12-19 at 12:24:25 said: > Well-written programs in a given language don't use the dangerous features
Some languages have dangerous features that are disabled by default and must be explicitly enabled prior to use. C++ should become one of those languages.
I am very fond of the 'override' keyword in C++11, which allows me to say "I think this virtual method overrides something, and don't compile the code if I'm wrong about that." Making that assertion incorrectly was a huge source of C++ errors for me back in the days when I still used C++ virtual methods instead of lambdas. C++11 solved that problem two completely different ways: one informs me when I make a mistake, and the other makes it impossible to be wrong.
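A sketch of the mistake 'override' catches (Widget and FancyWidget are made-up names):

struct Widget {
    virtual void draw(int scale) const { (void)scale; }
    virtual ~Widget() = default;
};

struct FancyWidget : Widget {
    // Signature typo: long instead of int. Pre-C++11 this silently *hides*
    // the base function instead of overriding it -- no diagnostic at all.
    void draw(long scale) const { (void)scale; }

    // Marking it 'override' turns the same typo into a compile-time error:
    // void draw(long scale) const override;   // error: does not override
};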
Arguably, one should be able to annotate any C++ block and say "there shall be no manipulation of bare pointers here" or "all array access shall be bounds-checked here" or even " and that's the default for the entire compilation unit." GCC can already emit warnings for these without human help in some cases. Reply ↓
- Kevin S. Van Horn on 2017-12-20 at 12:20:14 said: Is this a good summary of your objections to C++ smart pointers as a solution to AMM?
1. Circular references. C++ has smart pointer classes that work when your data structures are acyclic, but it doesn't have a good solution for circular references. I'm guessing that reposurgeon's graphs are almost never DAGs.
2. Subversion of AMM. Bare news and deletes are still available, so some later maintenance programmer could still introduce memory leaks. You could forbid the use of bare new and delete in your project, and write a check-in hook to look for violations of the policy, but that's one more complication to worry about and it would be difficult to impossible to implement reliably due to macros and the general difficulty of parsing C++.
3. Memory corruption. It's too easy to overrun the end of arrays, treat a pointer to a single object as an array pointer, or otherwise corrupt memory. Reply ↓
- esr on 2017-12-20 at 15:51:55 said: >Is this a good summary of your objections to C++ smart pointers as a solution to AMM?
That is at least a large subset of my objections, and probably the most important ones. Reply ↓
- jim on 2017-12-22 at 07:15:20 said: It is uncommon to find a cyclic graph that cannot be rendered acyclic by weak pointers.
C++17 cheerfully breaks backward compatibility by removing some dangerous idioms, refusing to compile code that should never have been written. Reply ↓
- guest on 2017-12-20 at 19:12:01 said: > Circular references. C++ has smart pointer classes that work when your data structures are acyclic, but it doesn't have a good solution for circular references. I'm guessing that reposurgeon's graphs are almost never DAGs.
General graphs with possibly-cyclical references are precisely the workload GC was created to deal with optimally, so ESR is right in a sense that reposurgeon _requires_ a GC-capable language to work. In most other programs, you'd still want to make sure that the extent of the resources that are under GC-control is properly contained (which a Rust-like language would help a lot with) but it's possible that even this is not quite worthwhile for reposurgeon. Still, I'd want to make sure that my program is optimized in _other_ possible ways, especially wrt. using memory bandwidth efficiently – and Go looks like it doesn't really allow that. Reply ↓
- esr on 2017-12-20 at 20:12:49 said: >Still, I'd want to make sure that my program is optimized in _other_ possible ways, especially wrt. using memory bandwidth efficiently – and Go looks like it doesn't really allow that.
Er, is there any language that does allow it? Reply ↓
- Jeff Read on 2017-12-27 at 20:58:43 said: Yes -- ahem -- C++. That's why it's pretty much the only language taken seriously by game developers. Reply ↓
- Zygo on 2017-12-21 at 12:56:20 said: > I'm guessing that reposurgeon's graphs are almost never DAGs
Why would reposurgeon's graphs not be DAGs? Some exotic case that comes up with e.g. CVS imports that never arises in a SVN->Git conversion (admittedly the only case I've really looked deeply at)?
Git repos, at least, are cannot-be-cyclic-without-astronomical-effort graphs (assuming no significant advances in SHA1 cracking and no grafts–and even then, all you have to do is detect the cycle and error out). I don't know how a generic revision history data structure could contain a cycle anywhere even if I wanted to force one in somehow. Reply ↓
- esr on 2017-12-21 at 15:13:18 said: >Why would reposurgeon's graphs not be DAGs?
The repo graph is, but a lot of the structures have reference loops for fast lookup. For example, a blob instance has a pointer back to the containing repo, as well as being part of the repo through a pointer chain that goes from the repo object to a list of commits to a blob.
Without those loops, navigation in the repo structure would get very expensive. Reply ↓
- guest on 2017-12-21 at 15:22:32 said: Aren't these inherently "weak" pointers though? In that they don't imply ownership/live data, whereas the "true" DAG references do? In that case, and assuming you can be sufficiently sure that only DAGs will happen, refcounting (ideally using something like Rust) would very likely be the most efficient choice. No need for a fully-general GC here. Reply ↓
- esr on 2017-12-21 at 15:34:40 said: >Aren't these inherently "weak" pointers though? In that they don't imply ownership/live data
I think they do. Unless you're using "ownership" in some sense I don't understand. Reply ↓
- jim on 2017-12-22 at 07:31:39 said: A weak pointer does not own the object it points to. A shared pointer does.
When there are zero shared pointers pointing to an object, it gets freed, regardless of how many weak pointers are pointing to it.
Shared pointers and unique pointers own, weak pointers do not own. Reply ↓
- jim on 2017-12-22 at 07:23:35 said: In C++11, one would implement a pointer back to the owning object as a weak pointer. Reply ↓
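As a hedged illustration of that idiom (Repo and Blob here are toy stand-ins, not reposurgeon's actual types): the repo owns its blobs through shared_ptr, each blob points back through a weak_ptr, and the back-pointer never keeps the repo alive.

#include <iostream>
#include <memory>
#include <vector>

struct Repo;

struct Blob {
    std::weak_ptr<Repo> owner;                  // back-pointer: non-owning
};

struct Repo : std::enable_shared_from_this<Repo> {
    std::vector<std::shared_ptr<Blob>> blobs;   // owning pointers

    void add_blob() {
        auto b = std::make_shared<Blob>();
        b->owner = shared_from_this();          // weak back-reference
        blobs.push_back(std::move(b));
    }
};

int main() {
    auto repo = std::make_shared<Repo>();
    repo->add_blob();

    std::shared_ptr<Blob> blob = repo->blobs.front();
    if (auto r = blob->owner.lock())            // upgrade the weak pointer to use it
        std::cout << "repo alive, " << r->blobs.size() << " blob(s)\n";

    repo.reset();                               // repo freed; the weak pointer doesn't prevent it
    std::cout << (blob->owner.expired() ? "owner gone\n" : "owner alive\n");
}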
- jim on 2017-12-23 at 00:40:36 said:
> How many times do I have to repeat "reposurgeon would never have been written under that constraint" before somebody who claims LISP experience gets it?
Maybe it is true, but since you do not understand, or particularly wish to understand, Rust scoping, ownership, and zero cost abstractions, or C++ weak pointers, we hear you say that you would never have written reposurgeon under that constraint.
Which, since no one else is writing reposurgeon, is an argument, but not an argument that those who do get weak pointers and rust scopes find all that convincing.
I am inclined to think that those who write C++98 (which is the gcc default) could not write reposurgeon under that constraint, but those who write C++11 could write reposurgeon under that constraint, and except for some rather unintelligible, complicated, and twisted class constructors invoking and enforcing the C++11 automatic memory management system, it would look very similar to your existing python code. Reply ↓
- esr on 2017-12-23 at 02:49:13 said: >since you do not understand, or particularly wish to understand, Rust scoping, ownership, and zero cost abstractions, or C++ weak pointers
Thank you, I understand those concepts quite well. I simply prefer to apply them in languages not made of barbed wire and landmines. Reply ↓
- guest on 2017-12-23 at 07:11:48 said: I'm sure that you understand the _gist_ of all of these notions quite accurately, and this alone is of course quite impressive for any developer – but this is not quite the same as being comprehensively aware of their subtler implications. For instance, both James and I have suggested to you that backpointers implemented as an optimization of an overall DAG structure should be considered "weak" pointers, which can work well alongside reference counting.
For that matter, I'm sure that Rustlang developers share your aversion to "barbed wire and landmines" in a programming language. You've criticized Rust before (not without some justification!) for having half-baked async-IO facilities, but I would think that reposurgeon does not depend significantly on async-IO. Reply ↓
- esr on 2017-12-23 at 08:14:25 said: >For instance, both James and I have suggested to you that backpointers implemented as an optimization of an overall DAG structure should be considered "weak" pointers, which can work well alongside reference counting.
Yes, I got that between the time I wrote my first reply and JAD brought it up. I've used Python weakrefs in similar situations. I would have seemed less dense if I'd had more sleep at the time.
>For that matter, I'm sure that Rustlang developers share your aversion to "barbed wire and landmines" in a programming language.
That acidulousness was mainly aimed at C++. Rust, if it implements its theory correctly (a point on which I am willing to be optimistic) doesn't have C++'s fatal structural flaws. It has problems of its own which I won't rehash as I've already anatomized them in detail. Reply ↓
- Garrett on 2017-12-21 at 11:16:25 said: There's also development cost. I suspect that using eg. Python drastically reduces the cost for developing the code. And since most repositories are small enough that Eric hasn't noticed accidental O(n**2) or O(n**3) algorithms until recently, it's pretty obvious that execution time just plainly doesn't matter. Migration is going to involve a temporary interruption to service and is going to be performed roughly once per repo. The amount of time involved in just stopping the eg. SVN service and bringing up the eg. GIT hosting service is likely to be longer than the conversion time for the median conversion operation.
So in these cases, most users don't care about the run-time, and outside of a handful of examples, wouldn't brush up against the CPU or memory limitations of a whitebox PC.
This is in contrast to some other cases in which I've worked such as file-serving (where latency is measured in microseconds and is actually counted), or large data processing (where wasting resources reduces the total amount of stuff everybody can do). Reply ↓
- David Collier-Brown on 2017-12-18 at 20:20:59 said: Hmmn, I wonder if the virtual memory of Linux (and Unix, and Multics) is really the OS equivalent of the automatic memory management of application programs? One works in pages, admittedly, not bytes or groups of bytes, but one could argue that the sub-page stuff is just expensive anti-internal-fragmentation plumbing
–dave
[In polite Canajan, "I wonder" is the equivalent of saying "Hey everybody, look at this" in the US. And yes, that's also the redneck's famous last words.] Reply ↓
- John Moore on 2017-12-18 at 22:20:21 said: In my experience, with most of my C systems programming in protocol stacks and transaction processing infrastructure, the MM problem has been one of code, not data structure complexity. The memory is often allocated by code which first encounters the need, and it is then passed on through layers and at some point, encounters code which determines the memory is no longer needed. All of this creates an implicit contract that he who is handed a pointer to something (say, a buffer) becomes responsible for disposing of it. But, there may be many places where that is needed – most of them in exception handling.
That creates many, many opportunities for some to simply forget to release it. Also, when the code is handed off to someone unfamiliar, they may not even know about the contract. Crises (or bad habits) lead to failures to document this stuff (or create variable names or clear conventions that suggest one should look for the contract).
I've also done a bunch of stuff in Java, both applications level (such as a very complex Android app with concurrency) and some infrastructural stuff that wasn't as performance constrained. Of course, none of this was hard real-time although it usually at least needed to provide response within human limits, which GC sometimes caused trouble with. But, the GC was worth it, as it substantially reduced bugs which showed up only at runtime, and it simplified things.
On the side, I write hard real time stuff on tiny, RAM constrained embedded systems – PIC18F series stuff (with the most horrible machine model imaginable for such a simple little beast). In that world, there is no malloc used, and shouldn't be. It's compile time created buffers and structures for the most part. Fortunately, the applications don't require advanced dynamic structures (like symbol tables) where you need memory allocation. In that world, AMM isn't an issue. Reply ↓
- Michael on 2017-12-18 at 22:47:26 said: PIC18F series stuff (with the most horrible machine model imaginable for such a simple little beast)
LOL. Glad I'm not the only one who thought that. Most of my work was on the 16F – after I found out what it took to do a simple table lookup, I was ready for a stiff drink. Reply ↓
- esr on 2017-12-18 at 23:45:03 said: >In my experience, with most of my C systems programming in protocol stacks and transaction processing infrastructure, the MM problem has been one of code, not data structure complexity.
I believe you. I think I gravitate to problems with data-structure complexity because, well, that's just the way my brain works.
But it's also true that I have never forgotten one of the earliest lessons I learned from Lisp. When you can turn code complexity into data structure complexity, that's usually a win. Or to put it slightly differently, dumb code munching smart data beats smart code munching dumb data. It's easier to debug and reason about. Reply ↓
- Jeremy on 2017-12-19 at 01:36:47 said: Perhaps its because my coding experience has mostly been short python scripts of varying degrees of quick-and-dirtiness, but I'm having trouble grokking the difference between smart code/dumb data vs dumb code/smart data. How does one tell the difference?
Now, as I type this, my intuition says it's more than just the scary mess of nested if statements being in the class definition for your data types, as opposed to the function definitions which munch on those data types; a scary mess of nested if statements is probably the former. The latter though I'm coming up blank.
Perhaps a better question than my one above: what codebases would you recommend for study which would be good examples of the latter (besides reposurgeon)? Reply ↓
- jsn on 2017-12-19 at 02:35:48 said: I've always expressed it as "smart data + dumb logic = win".
You almost said my favorite canned example: a big conditional block vs. a lookup table. The LUT can replace all the conditional logic with structured data and shorter (simpler, less bug-prone, faster, easier to read) unconditional logic that merely does the lookup. Concretely in Python, imagine a long list of "if this, assign that" replaced by a lookup into a dictionary. It's still all "code", but the amount of program logic is reduced.
So I would answer your first question by saying look for places where data structures are used. Then guesstimate how complex some logic would have to be to replace that data. If that complexity would outstrip that of the data itself, then you have a "smart data" situation. Reply ↓
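The same shape sketched in C++ (a made-up status-code table, purely illustrative): a chain of conditionals collapses into one table plus one unconditional lookup.

#include <string>
#include <unordered_map>

// Smart code, dumb data: every new code means another branch to maintain.
std::string describe_if(int code) {
    if (code == 200) return "ok";
    else if (code == 301) return "moved";
    else if (code == 404) return "not found";
    else if (code == 500) return "server error";
    else return "unknown";
}

// Dumb code, smart data: the knowledge lives in the table, the logic is one lookup.
std::string describe_lut(int code) {
    static const std::unordered_map<int, std::string> table = {
        {200, "ok"}, {301, "moved"}, {404, "not found"}, {500, "server error"},
    };
    auto it = table.find(code);
    return it != table.end() ? it->second : "unknown";
}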
- Emanuel Rylke on 2017-12-19 at 04:07:58 said: To expand on this, it can even be worth to use complex code to generate that dumb lookup table. This is so because the code generating the lookup table runs before, and therefore separately, from the code using the LUT. This means that both can be considered in isolation more often; bringing the combined complexity closer to m+n than m*n. Reply ↓
- TheDividualist on 2017-12-19 at 05:39:39 said: Admittedly I have an SQL hammer and think everything is a nail, but why wouldn't *every* program include a database, like the SQLite that even comes bundled with Python distros, no sweat, and put that lookup table into it, not in a dictionary inside the code?
Of course the more you go in this direction the more problems you will have with unit testing, in case you want to do such a thing. Generally we SQL-hammer guys don't do that much, because in theory any function can read any part of the database, making the whole database the potential "inputs" for every function.
That is pretty lousy design, but I think good design patterns for separations of concerns and unit testability are not yet really known for database driven software, I mean, for example, model-view-controller claims to be one, but actually fails as these can and should call each other. So you have in the "customer" model or controller a function to check if the customer has unpaid invoices, and decide to call it from the "sales order" controller or model to ensure such customers get no new orders registered. In the same "sales order" controller you also check the "product" model or controller if it is not a discontinued product and check the "user" model or controller if they have the proper rights for this operation and the "state" controller if you are even offering this product in that state and so on a gazillion other things, so if you wanted to automatically unit test that "register a new sales order" function you have a potential "input" space of half the database. And all that with good separation of concerns MVC patterns. So I think no one really figured this out yet? Reply ↓
- guest on 2017-12-20 at 19:21:13 said: There's a reason not to do this if you can help it – dispatching through a non-constant LUT is way slower than running easily-predicted conditionals. Like, an order of magnitude slower, or even worse. Reply ↓
- esr on 2017-12-19 at 07:45:38 said: >Perhaps a better question than my one above: what codebases would you recommend for study which would be good examples of the latter (besides reposurgeon)?
I do not have an instant answer, sorry. I'll hand that question to my backbrain and hope an answer pops up. Reply ↓
- Jon Brase on 2017-12-20 at 00:54:15 said: When you can turn code complexity into data structure complexity, that's usually a win. Or to put it slightly differently, dumb code munching smart data beats smart code munching dumb data. It's easier to debug and reason about.
Doesn't "dumb code munching smart data" really reduce to "dumb code implementing a virtual machine that runs a different sort of dumb code to munch dumb data"? Reply ↓
- jim on 2017-12-22 at 20:25:07 said: "Smart Data" is effectively a domain specific language.
A domain specific language is easier to reason about within its proper domain, because it lowers the difference between the problem and the representation of the problem. Reply ↓
- wisd0me on 2017-12-19 at 02:35:10 said: I wonder why you talked about inventing an AMM-layer so much, but told nothing about the GC, which is available for C language. Why do you need to invent some AMM-layer in the first place, instead of just using the GC?
For example, Bigloo Scheme and The GNU Objective C runtime successfully used it, among many others. Reply ↓
- Walter Bright on 2017-12-19 at 04:53:13 said: Maybe D, with its support for mixed GC / manual memory allocation is the right path after all! Reply ↓
- Jeremy Bowers on 2017-12-19 at 10:40:24 said: Rust seems like a good fit for the cases where you need the low latency (and other speed considerations) and can't afford the automation. Firefox finally got to benefit from that in the Quantum release, and there's more coming. I wouldn't dream of writing a browser engine in Go, let alone a highly-concurrent one. When you're willing to spend on that sort of quality, Rust is a good tool to get there.
But the very characteristics necessary to be good in that space will prevent it from becoming the "default language" the way C was for so long. As much fun as it would be to fantasize about teaching Rust as a first language, I think that's crazy talk for anything but maybe MIT. (And I'm not claiming it's a good idea even then; just saying that's the level of student it would take for it to even be possible .) Dunno if Go will become that "default language" but it's taking a decent run at it; most of the other contenders I can think of at the moment have the short-term-strength-yet-long-term-weakness of being tied to a strong platform already. (I keep hearing about how Swift is going to be usable off of Apple platforms real soon now just around the corner just a bit longer .) Reply ↓
- esr on 2017-12-19 at 17:30:07 said: >Dunno if Go will become that "default language" but it's taking a decent run at it; most of the other contenders I can think of at the moment have the short-term-strength-yet-long-term-weakness of being tied to a strong platform already.
I really think the significance of Go being an easy step up from C cannot be overestimated – see my previous blogging about the role of inward transition costs.
Ken Thompson is insidiously clever. I like channels and goroutines and := but the really consequential hack in Go's design is the way it is almost perfectly designed to co-opt people like me – that is, experienced C programmers who have figured out that ad-hoc AMM is a disaster area. Reply ↓
- Jeff Read on 2017-12-20 at 08:58:23 said: Go probably owes as much to Rob Pike and Phil Winterbottom for its design as it does to Thompson -- because it's basically Alef with the feature whose lack, according to Pike, basically killed Alef: garbage collection.
I don't know that it's "insidiously clever" to add concurrency primitives and GC to a C-like language, as concurrency and memory management were the two obvious banes of every C programmer's existence back in the 90s -- so if Go is "insidiously clever", so is Java. IMHO it's just smart, savvy design which is no small thing; languages are really hard to get right. And in the space Go thrives in, Go gets a lot right. Reply ↓
- John G on 2017-12-19 at 14:01:09 said: Eric, have you looked into D *lately*? These days:
* it's fully open source (Boost license),
* there's [three back-ends to choose from]( https://dlang.org/download.html ),
* there's [exactly one standard library]( https://dlang.org/phobos/index.html ), and
* it's got a standard package repository and management tool ([dub]( https://code.dlang.org/ )). Reply ↓
- esr on 2017-12-19 at 15:14:02 said: >Eric, have you looked into D *lately*?
No. What's it got that Go does not?
That's not intended as a hostile question, I'm trying to figure out where to focus my attention when I read up on it. Reply ↓
- PInver on 2017-12-19 at 15:36:18 said: D takes safety seriously, like Rust, but with a human usable approach, see:
https://github.com/dlang/DIPs/blob/master/DIPs/DIP1000.md
The D template system is simply the most powerful and simple template system I've seen so far, see:
https://github.com/PhilippeSigaud/D-templates-tutorial
The garbage collector can be tamed, see:
https://wiki.dlang.org/DIP60
A minimal, no-runtime subset can be used, suitable for being a replacement for C code, see:
https://dlang.org/spec/betterc.html Reply ↓
- xenon325 on 2017-12-20 at 12:15:01 said: Good list. I would add to that:
First, `pure` functions and transitive `const`, which make code so much easier to reason about
Second, almost the entire language is available at compile time. That, combined with templates, enables crazy (in a good way) stuff, like building an optimized state machine for a regex at compile time. Given that the regex pattern is known at compile time, of course. But that's pretty common.
Can't find it now, but there were benchmarks which show it's faster than any run-time built regex engine out there. Still, the source code is pretty straightforward – one doesn't have to be Einstein to write code like that [1].
There is a talk by Andrei Alexandrescu called "Fastware" where he shows how various metaprogramming facilities enable useful optimizations [2].
And a more recent talk, "Design By Introspection" [3], where he shows how these facilities enable much more compact designs and implementations.
[1] https://github.com/dlang/phobos/blob/master/std/regex/package.d#L426
[2] https://youtu.be/AxnotgLql0k
[3] video: https://youtu.be/29h6jGtZD-U?t=1m6s
slides: https://dconf.org/2017/talks/alexandrescu.pdf Reply ↓
- John G on 2017-12-19 at 15:45:27 said: > > Eric, have you looked into D *lately*?
> No. What's it got that Go does not?
Not sure. I've only recently begun learning D, and I don't know Go. [The D overview]( https://dlang.org/overview.html ) may include enough for you to surmise the differences though. Reply ↓
- Doctor Mist on 2017-12-19 at 18:28:54 said:
As the greenspunity rises, you are likely to find that more and more of your effort and defect chasing is related to the AMM layer, and proportionally less goes to the application logic. Redoubling your effort, you increasingly miss your aim.
Even when you're merely at the edge of this trap, your defect rates will be dominated by issues like double-free errors and malloc leaks. This is commonly the case in C/C++ programs of even low greenspunity.
Interesting. This certainly fits my experience.
Has anybody looked for common patterns in whatever parasitic distractions plague you when you start to reach the limits of a language with AMM? Reply ↓
- Dave taht on 2017-12-23 at 10:44:24 said: The biggest thing that I hate about go is the
result, err = whatever()
if (err) dosomethingtofixit();
abstraction.
I went through a phase earlier this year where I tried to eliminate the concept of an errno entirely (and failed, in the end reinventing lisp, badly), but sometimes I still think – to the tune of the Ride of the Valkyries – "Kill the errno, kill the errno, kill the ERRno, kill the err!" Reply ↓
- jim on 2017-12-23 at 23:37:46 said: I have on several occasions been part of big projects using languages with AMM, many programmers, much code, and they hit scaling problems and died, but it is not altogether easy to explain what the problem was.
But it was very clear that the fact that I could get a short program, or a quick fix up and running with an AMM much faster than in C or C++ was failing to translate into getting a very large program containing far too many quick fixes up and running. Reply ↓
- François-René Rideau on 2017-12-19 at 21:39:05 said: Insightful, but I think you are missing a key point about Lisp and Greenspunning.
AMM is not the only thing that Lisp brings to the table when it comes to dealing with Greenspunity. Actually, the whole point of Lisp is that there is not _one_ conceptual barrier to development, or a few, or even a lot, but that there are _arbitrarily_many_, and that is why you need to be able to extend your language through _syntactic_abstraction_ to build DSLs so that every abstraction layer can be written in a language that is fit for that layer. [Actually, traditional Lisp is missing the fact that DSL tooling depends on _restriction_ as well as _extension_; but Haskell types and Racket languages show the way forward in this respect.]
That is why all languages without macros, even with AMM, remain "blub" to those who grok Lisp. Even in Go, they reinvent macros, just very badly, with various preprocessors to cope with the otherwise very low abstraction ceiling.
(Incidentally, I wouldn't say that Rust has no AMM; instead it has static AMM. It also has some support for macros.) Reply ↓
- Patrick Maupin on 2017-12-23 at 18:44:27 said: " static AMM" ???
WTF sort of abuse of language is this?
Oh, yeah, rust -- the language developed by Humpty Dumpty acolytes:
https://github.com/rust-lang/rust/pull/25640
You just can't make this stuff up. Reply ↓
- jim on 2017-12-23 at 22:02:18 said: Static AMM means that the compiler analyzes your code at compile time, and generates the appropriate frees.
Static AMM means that the compiler automatically does what you manually do in C, and semi-automatically do in C++11 Reply ↓
- Patrick Maupin on 2017-12-24 at 13:36:35 said: To the extent that the compiler's insertion of calls to free() can be easily deduced from the code without special syntax, the insertion is merely an optimization of the sort of standard AMM semantics that, for example, a PyPy compiler could do.
To the extent that the compiler's ability to insert calls to free() requires the sort of special syntax about borrowing that means that the programmer has explicitly described a non-stack-based scope for the variable, the memory management isn't automatic.
Perhaps this is why a google search for "static AMM" doesn't return much. Reply ↓
- Jeff Read on 2017-12-27 at 03:01:19 said: I think you fundamentally misunderstand how borrowing works in Rust.
In Rust, as in C++ or even C, references have value semantics. That is to say any copies of a given reference are considered to be "the same". You don't have to "explicitly describe a non-stack-based scope for the variable", but the hitch is that there can be one, and only one, copy of the original reference to a variable in use at any time. In Rust this is called ownership, and only the owner of an object may mutate it.
Where borrowing comes in is that functions called by the owner of an object may borrow a reference to it. Borrowed references are read-only, and may not outlast the scope of the function that does the borrowing. So everything is still scope-based. This provides a convenient way to write functions in such a way that they don't have to worry about where the values they operate on come from or unwrap any special types, etc.
If you want the scope of a reference to outlast the function that created it, the way to do that is to use an Rc (std::rc::Rc), which provides a regular, reference-counted pointer to a heap-allocated object, the same as Python.
The borrow checker checks all of these invariants for you and will flag an error if they are violated. Since worrying about object lifetimes is work you have to do anyway lest you pay a steep price in performance degradation or resource leakage, you win because the borrow checker makes this job much easier.
Rust does have explicit object lifetimes, but where these are most useful is to solve the problem of how to have structures, functions, and methods that contain/return values of limited lifetime. For example, declaring a struct Foo<'a> { x: &'a i32 } means that any instance of struct Foo is valid only as long as the borrowed reference inside it is valid. The borrow checker will complain if you attempt to use such a struct outside the lifetime of the internal reference. Reply ↓
- Doctor Locketopus on 2017-12-27 at 00:16:54 said: Good Lord (not to be confused with Audre Lorde). If I weren't already convinced that Rust is a cult, that would do it.
However, I must confess to some amusement about Karl Marx and Michel Foucault getting purged (presumably because Dead White Male). Reply ↓
- Jeff Read on 2017-12-27 at 02:06:40 said: This is just a cost of doing business. Hacker culture has, for decades, tried to claim it was inclusive and nonjudgemental and yada yada -- "it doesn't matter if you're a brain in a jar or a superintelligent dolphin as long as your code is good" -- but when it comes to actually putting its money where its mouth is, hacker culture has fallen far short. Now that's changing, and one of the side effects of that is how we use language and communicate internally, and to the wider community, has to change.
But none of this has to do with automatic memory management. In Rust, management of memory is not only fully automatic, it's "have your cake and eat it too": you have to worry about neither releasing memory at the appropriate time, nor the severe performance costs and lack of determinism inherent in tracing GCs. You do have to be more careful in how you access the objects you've created, but the compiler will assist you with that. Think of the borrow checker as your friend, not an adversary. Reply ↓
- John on 2017-12-20 at 05:03:22 said: Present-day C++ is far from the C++ that was first standardized in 1998. You should *never* be manually managing memory in present-day C++. You need a dynamically sized array? Use std::vector. You need an ad-hoc graph? Use std::shared_ptr and std::weak_ptr.
Any code I see which uses new or delete, malloc or free outright fails code review. Reply ↓
- Garrett on 2017-12-21 at 11:24:41 said: What makes you refer to this as a systems programming project? It seems to me to be a standard data-processing problem. Data in, data out. Sure, it's hella complicated and you're brushing up against several different constraints.
In contrast to what I think of as systems programming, you have automatic memory management. You aren't working in kernel-space. You aren't modifying the core libraries or doing significant programmatic interface design.
I'm missing something in your semantic usage and my understanding of the solution implementation. Reply ↓
- esr on 2017-12-21 at 15:08:28 said: >What makes you refer to this as a systems programming project?
Never user-facing. Often scripted. Development-support tool. Used by systems programmers.
I realize we're in an area where the "systems" vs. "application" distinction gets a little tricky to make. I hang out in that border zone a lot and have thought about this. Are GPSD and ntpd "applications"? Is giflib? Sure, they're out-of-kernel, but no end-user will ever touch them. Is GCC an application? Is apache or named?
Inside the kernel is clearly systems. Outside it, I think the "systems" vs. "application" distinction is more about the skillset being applied and who your expected users are than anything else.
I would not be upset at anyone who argued for a different distinction. I think you'll find the definitional questions start to get awfully slippery when you poke at them. Reply ↓
- Jeff Read on 2017-12-24 at 03:21:34 said:
What makes you refer to this as a systems programming project? It seems to me to be a standard data-processing problem. Data in, data out. Sure, it's hella complicated and you're brushing up against several different constraints.
When you're talking about Unix, there is often considerable overlap between "systems" and "application" programming because the architecture of Unix, with pipes, input and output redirection, etc., allowed for essential OS components to be turned into simple, data-in-data-out user-space tools. The functionality of ls, cp, rm, or cat, for instance, would have been built into the shell of a pre-Unix OS (or many post-Unix ones). One of the great innovations of Unix is to turn these units of functionality into standalone programs, and then make spawning processes cheap enough to where using them interactively from the shell is easy and natural. This makes extending the system, as accessed through the shell, easy: just write a new, small program and add it to your PATH.
So yeah, when you're working in an environment like Unix, there's no bright-line distinction between "systems" and "application" code, just like there's no bright-line distinction between "user" and "developer". Unix is a tool for facilitating humans working with computers. It cannot afford to discriminate, lest it lose its Unix-nature. (This is why Linux on the desktop will never be a thing, not without considerable decay in the facets of Linux that made it so great to begin with.) Reply ↓
- Peter Donis on 2017-12-21 at 22:15:44 said: @tz: you aren't going to get AMM on the current Arduino variants. At least not easily.
At the upper end you can; the Yun has 64 MB, as do the Dragino variants. You can run OpenWRT on them and use its Python (although the latest OpenWRT release, Chaos Calmer, significantly increased its storage footprint from older firmware versions), which runs fine in that memory footprint, at least for the kinds of things you're likely to do on this type of device. Reply ↓
- esr on 2017-12-21 at 22:43:57 said: >You can run OpenWRT on them and use its Python
I'd be comfortable in that environment, but if we're talking AMM languages Go would probably be a better match for it. Reply ↓
- Peter Donis on 2017-12-21 at 23:16:33 said: Go is not available as a standard package on OpenWRT, but it probably won't be too much longer before it is. Reply ↓
- Jeff Read on 2017-12-22 at 14:07:21 said: Go binaries are statically linked, so the best approach is probably to install Go on your big PC, cross compile, and push the binary out to the device. Cross-compiling is a doddle; simply set GOOS and GOARCH. Reply ↓
- Michael on 2017-12-22 at 15:07:09 said: This is one of Go's best features IMO. Reply ↓
- jim on 2017-12-22 at 06:37:36 said: C++11 has an excellent automatic memory management layer. Its only defect is that it is optional, for backwards compatibility with C and C++98 (though it really is not all that compatible with C++98)
And, being optional, you are apt to take the short cut of not using it, which will bite you.
Rust is, more or less, C++17 with the automatic memory management layer being almost mandatory. Reply ↓
- jim on 2017-12-22 at 20:39:27 said:
> you are likely to find that more and more of your effort and defect chasing is related to the AMM layer
But the AMM layer for C++ has already been written and debugged, and standards and idioms exist for integrating it into your classes and type definitions.
Once built into your classes, you are then free to write code as if in a fully garbage collected language in which all types act like ints.
C++14, used correctly, is a metalanguage for writing domain specific languages.
Now sometimes building your classes in C++ is weird, nonobvious, and apt to break for reasons that are difficult to explain, but done correctly all the weird stuff is done once in a small number of places, not spread all over your code Reply ↓
- Dave taht on 2017-12-22 at 22:31:40 said: Linux is the best C library ever created. And it's often, terrifying. Things like RCU are nearly impossible for mortals to understand. Reply ↓
- Alex Beamish on 2017-12-23 at 11:18:48 said: Interesting thesis .. it was the 'extra layer of goodness' surrounding file operations, and not memory management, that persuaded me to move from C to Perl about twenty years ago. Once I'd moved, I also appreciated the memory management in the shape of 'any size you want' arrays, hashes (where had they been all my life?) and autovivification -- on the spot creation of array or hash elements, at any depth.
While C is a low-level language that masquerades as a high-level language, the original intent of the language was to make writing assembler easier and faster. It can still be used for that, when necessary, leaving the more complicated matters to higher level languages. Reply ↓
- esr on 2017-12-23 at 14:36:26 said: >Interesting thesis .. it was the 'extra layer of goodness' surrounding file operations, and not memory management, that persuaded me to move from C to Perl about twenty years ago.
Pretty much all that goodness depends on AMM and could not be implemented without it. Reply ↓
- jim on 2017-12-23 at 22:17:39 said: Autovivification saves you much effort, thought, and coding, because most of the time the perl interpreter correctly divines your intention, and does a pile of stuff for you, without you needing to think about it.
And then it turns around and bites you because it does things for you that you did not intend or expect.
The larger the program, and the longer you are keeping the program around, the more it is a problem. If you are writing a quick one off script to solve some specific problem, you are the only person who is going to use the script, and are then going to throw the script away, fine. If you are writing a big program that will be used by lots of people for a long time, autovivification is going to turn around and bite you hard, as are lots of similar perl features where perl makes life easy for you by doing stuff automagically.
With the result that there are in practice very few big perl programs used by lots of people for a long time, while there are an immense number of very big C and C++ programs used by lots of people for a very long time.
On esr's argument, we should never be writing big programs in C any more, and yet, we are.
I have been part of big projects with many engineers using languages with automatic memory management. I noticed I could get something up and running in a fraction of the time that it took in C or C++.
And yet, somehow, strangely, the projects as a whole never got successfully completed. We found ourselves fighting weird shit done by the vast pile of run time software that was invisibly under the hood automatically doing stuff for us. We would be fighting mysterious and arcane installation and integration issues.
This, my personal experience, is the exact opposite of the outcome claimed by esr.
Well, that was perl, Microsoft Visual Basic, and PHP. Maybe Java scales better.
But perl, Microsoft visual basic, and PHP did not scale. Reply ↓
- esr on 2017-12-23 at 22:41:15 said: >But perl, Microsoft visual basic, and PHP did not scale.
Oh, dear Goddess, no wonder. All three of those languages are notorious sinkholes – they're where "maintainability" goes to die a horrible and lingering death.
Now I understand your fondness for C++ better. It's bad, but those are way worse at any large scale. AMM isn't enough to keep you out of trouble if the rest of the language is a tar-pit. Those three are full of the bones of drowned devops victims.
Yes, Java scales better. CPython would too from a pure maintainability standpoint, but it's too slow for the kind of deployment you're implying – on the other hand, PyPy might not be, I'm finding the JIT compilation works extremely well and I get runtimes I think are within 2x or 3x of C. Go would probably be da bomb. Reply ↓
- esr on 2017-12-23 at 23:35:29 said: I wrote:
>All three of those languages are notorious sinkholes
You know when you're in deep shit? You're in deep shit when your figure of merit is long-term maintainability and Perl is the least bad alternative.
*shudder* Reply ↓
- Jeff Read on 2017-12-24 at 02:56:28 said:
Oh, dear Goddess, no wonder. All three of those languages are notorious sinkholes – they're where "maintainability" goes to die a horrible and lingering death.
Can confirm -- Visual Basic (6 and VBA) is a toilet. An absolute cesspool. It's full of little gotchas -- such as non-short-circuiting AND and OR operators (there are no differentiated bitwise/logical operators) and the cryptic Dir() function that exactly mimics the broken semantics of MS-DOS's directory-walking system call -- that betray its origins as an extended version of Microsoft's 8-bit BASIC interpreter (the same one used to write toy programs on TRS-80s and Commodores from a bygone era), and prevent you from writing programs in a way that feels natural and correct if you've been exposed to nearly anything else.
VB is a language optimized to a particular workflow -- and like many languages so optimized as long as you color within the lines provided by the vendor you're fine, but it's a minefield when you need to step outside those lines (which happens sooner than you may think). And that's the case with just about every all-in-one silver-bullet "solution" I've seen -- Rails and PHP belong in this category too.
It's no wonder the cuddly new Microsoft under Nadella is considering making Python a first-class extension language for Excel (and perhaps other Office apps as well).
Visual Basic .NET is something quite different -- a sort of Microsoft-flavored Object Pascal, really. But I don't know of too many shops actually using it; if you're targeting the .NET runtime it makes just as much sense to just use C#.
As for Perl, it's possible to write large, readable, maintainable code bases in object-oriented Perl. I've seen it done. BUT -- you have to be careful. You have to establish coding standards, and if you come across the stereotype of "typical, looks-like-line-noise Perl code" then you have to flunk it at code review and never let it touch prod. (Do modern developers even know what line noise is, or where it comes from?) You also have to choose your libraries carefully, ensuring they follow a sane semantics that doesn't require weirdness in your code. I'd much rather just do it in Python. Reply ↓
- TheDividualist on 2017-12-27 at 11:24:59 said: VB.NET is unused in the kind of circles *you know* because these are competitive and status-conscious circles, and anything with BASIC in the name is so obviously low-status and looks so bad on the resume that it makes sense to add that 10-20% more effort and learn C#. C# sounds a whole lot more high-status; as it has C in the name, it looks like being a Real Programmer on the resume.
What you don't know is what happens outside the circles where professional programmers compete for status and jobs.
I can report that there are many "IT guys" who are not in these circles; they don't have the intra-programmer social life, hence no status concerns, nor do they ever intend to apply for Real Programmer jobs. They are just rural or non-first-world guys who grew up liking computers, took a generic "IT guy" job at some business in a small town, and there taught themselves Excel VBScript when the need arose to automate some reports, and then VB.NET when it was time to try to build some actual application for in-house use. They like it because it looks less intimidating – it sends out those "not only meant for Real Programmers" vibes.
I wish we lived in a world where Python would fill that non-intimidating amateur-friendly niche, as it could do that job very well, but we are already on a hell of a path dependence. Seriously, Bill Gates and Joel Spolsky got it seriously right when they made Excel scriptable. The trick is how to provide a smooth transition between non-programming and programming.
One classic way is that you are a sysadmin, you use the shell, then you automate tasks with shell scripts, then you graduate to Perl.
One, relatively new way is that you are a web designer, write HTML and CSS, and then slowly you get dragged, kicking and screaming into JavaScript and PHP.
The genius was that they realized that a spreadsheet is basically modern paper. It is the most basic and universal tool of the office drone. I print all my automatically generated reports into xlsx files, simply because for me it is the "paper" of 2017, you can view it on any Android phone, and unlike PDF and like paper you can interact and work with the figures, like add other numbers to them.
So it was automating the spreadsheet, the VBScript Excel macro that led the way from not-programming to programming for an immense number of office drones, who are far more numerous than sysadmins and web designers.
Aaand I think it was precisely because of those microcomputers, like the Commodore. Out of every 100 office drones in 1991 or so, 1 or 2 had entertained themselves in 1987 typing in some BASIC programs published in computer mags. So when they were told Excel is programmable with a form of BASIC they were not too intimidated.
This created such a giant path dependency that still if you want to sell a language to millions and millions of not-Real Programmers you have to at least make it look somewhat like Basic.
I think from this angle it was a masterwork of creating and exploiting path dependency. Put BASIC on microcomputers. Have a lot of hobbyists learn it for fun. Create the most universal office tool. Let it be programmable in a form of BASIC – you can just work on the screen, let it generate a macro and then you just have to modify it. Mostly copy-pasting, not real programming. But you slowly pick up some programming idioms. Then the path curves up to VB and then VB.NET.
To challenge it all, one needs to find an application area as important as number crunching and reporting in an office: Excel is basically electronic paper from this angle, and it is hard to come up with something like this. All our nearly computer-illiterate salespeople use it. (90% of the use beyond just typing data in a grid is using the auto-sum function.) And they don't use much else than that and Word and Outlook and chat apps.
Anyway suppose such a purpose can be found, then you can make it scriptable in Python and it is also important to be able to record a macro so that people can learn from the generated code. Then maybe that dominance can be challenged. Reply ↓
- Jeff Read on 2018-01-18 at 12:00:29 said: TIOBE says that while VB.NET saw an uptick in popularity in 2011, it's on its way down now and usage was moribund before then.
In your attempt to reframe my statements in your usual reference frame of Academic Programmer Bourgeoisie vs. Office Drone Proletariat, you missed my point entirely: VB.NET struggled to get a foothold during the time when VB6 was fresh in developers' minds. It was too different (and too C#-like) to win over VB6 devs, and didn't offer enough value-add beyond C# to win over the people who would've just used C# or Java. Reply ↓
- jim of jim's blog on 2018-02-10 at 19:10:17 said: Yes, but he has a point.
App -> macros -> macro script-> interpreted language with automatic memory management.
So you tend to wind up with a widely used language that was not so much designed, as accreted.
And, of course, programs written in this language fail to scale. Reply ↓
- Jeff Read on 2017-12-24 at 02:30:27 said:
I have been part of big projects with many engineers using languages with automatic memory management. I noticed I could get something up and running in a fraction of the time that it took in C or C++.
And yet, somehow, strangely, the projects as a whole never got successfully completed. We found ourselves fighting weird shit done by the vast pile of run time software that was invisibly under the hood automatically doing stuff for us. We would be fighting mysterious and arcane installation and integration issues.
Sounds just like every Ruby on Fails deployment I've ever seen. It's great when you're slapping together Version 0.1 of a product or so I've heard. But I've never joined a Fails team on version 0.1. The ones I saw were already well-established, and between the PFM in Rails itself, and the amount of monkeypatching done to system classes, it's very, very hard to reason about the code you're looking at. From a management level, you're asking for enormous pain trying to onboard new developers into that sort of environment, or even expand the scope of your product with an existing team, without them tripping all over each other.
There's a reason why Twitter switched from Rails to Scala. Reply ↓
- jim on 2017-12-27 at 03:53:42 said: Jeff Read wrote:
> Hacker culture has, for decades, tried to claim it was inclusive and nonjudgemental and yada yada; hacker culture has fallen far short. Now that's changing, has to change.
Observe that "has to change" in practice means that the social justice warriors take charge.
Observe that in practice, when the social justice warriors take charge, old bugs don't get fixed, new bugs appear, and projects turn into aimless garbage, if any development occurs at all.
"has to change" is a power grab, and the people grabbing power are not competent to code, and do not care about code.
Reflect on the attempted suicide of "Coraline". It is not people like me who keep using the correct pronouns that caused "her" to attempt suicide. It is the people who used "her" to grab power. Reply ↓
- esr on 2017-12-27 at 14:30:33 said: >"has to change" is a power grab, and the people grabbing power are not competent to code, and do not care about code.
It's never happened before, and may very well never happen again but this once I completely agree with JAD. The "change" the SJWs actually want – as opposed to what they claim to want – would ruin us. Reply ↓
- jim on 2017-12-27 at 19:42:36 said: To get back on topic:
Modern, mostly memory safe C++, is enforced by:
https://blogs.msdn.microsoft.com/vcblog/2016/03/31/c-core-guidelines-checkers-preview-of-the-lifetime-safety-checker/
http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-abstract
http://clang.llvm.org/extra/clang-tidy/
$ clang-tidy test.cpp -checks='clang-analyzer-cplusplus*,cppcoreguidelines-*,modernize-*'
cppcoreguidelines-* and modernize-* will catch most of the issues that esr complains about, in practice usually all of them, though I suppose that as the project gets bigger, some will slip through.
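For a concrete sense of what those checks complain about, here is a hypothetical C++ fragment (my illustration, not jim's or esr's code) showing a raw owning pointer of the kind cppcoreguidelines-owning-memory flags, next to the modernize-style fix:

#include <memory>

// Hypothetical example of the kind of issue the checkers above flag.
int* make_counter_raw() {
    return new int(0);   // cppcoreguidelines-owning-memory: raw owning 'new';
                         // the caller must remember to delete it, so leaks slip through
}

std::unique_ptr<int> make_counter_owned() {
    return std::make_unique<int>(0);   // modernize-make-unique: ownership is explicit
                                       // and the memory is released automatically
}

int main() {
    auto counter = make_counter_owned();
    ++*counter;
    return (*counter == 1) ? 0 : 1;
}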
Remember that gcc and g++ are C++98 by default, because of the vast base of old-fashioned C++ code which is subtly incompatible with C++11; C++11 onwards are the versions of C++ that optionally support memory safety, hence necessarily subtly incompatible.
To turn on C++11
Place
cmake_minimum_required(VERSION 3.5)
# set standard required to ensure that you get
# the same version of C++ on every platform
# as some environments default to older dialects
# of C++ and some do not.
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
in your CMakeLists.txt Reply ↓
- Paul R on 2017-12-28 at 10:26:44 said: The g++ default, as of now (see 2.2 at https://gcc.gnu.org/onlinedocs/gcc/Standards.html), is '-std=gnu++14', which is C++14 plus some GNU extras.
Which is good, I think. MSVC is similar; you have to specifically ask for C++17. Reply ↓
- h0bby1 on 2018-05-19 at 09:27:02 said: I think I solved a lot of those issues in C with a runtime I made.
Originally I made this system because I wanted to try programming a microkernel OS, with protected mode, PCI bus, USB, ACPI etc., and I didn't want to get close to the 'event horizon' of memory management in C.
But I didn't wait for Greenspun's law to kick in, so I first developed a safe memory system as a runtime, and replaced the standard C runtime and memory management with it.
I wanted zero segfaults or memory errors possible at all anywhere in the C code, because debugging bare-metal exceptions, without a debugger, with complex data structures made in C looks very close to that black hole.
I didn't want to use C++ because C++ compilers have very unpredictable binary formats and function name decoration, which makes them much harder to interface with at kernel level.
I also wanted a system as efficient as possible for managing lockless shared access to the whole memory between threads, to avoid the 'exclusive borrow' syndrome of Rust, with global variables shared between threads and lockless algorithms to access them.
I took inspiration from the algorithms on this site http://www.1024cores.net/ to develop the basic system, with strong references as the norm, and direct 'bare pointers' only as weak references for fast access to memory in C.
What I ended up doing is basically a 'strongly typed hashmap DAG' to store the object reference hierarchy, which can be manipulated using 'lambda expressions', so that applications manipulate objects only indirectly through the DAG abstraction, without having to touch bare pointers at all.
This also makes a mark-and-sweep garbage collector easier to do, especially with an 'event based' system: the main loop can call the garbage collector between two executions of event/message handlers, which has the advantage that it runs at a point where there is no application data on the stack to mark, so it avoids mistaking application data on the stack for a pointer. All references that live only in stack variables get automatically garbage collected when the function exits, much like in C++ actually.
The garbage collector can still be called by the allocator when there is an OOM error; it will attempt a collection before failing the allocation, but all references on the stack should be garbage collected once the function returns to the main loop and the garbage collector runs.
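The shape of that collection point, as a bare skeleton in C++ rather than the commenter's C runtime (Heap, collect and the handler lambda are stand-in names of mine): the collector runs only between handlers, when no handler frames are left on the stack.

#include <functional>
#include <queue>

// Bare skeleton of collecting only between event handlers; not the commenter's runtime.
struct Heap {
    void collect() { /* placeholder: a real collector would mark from global
                        roots and sweep everything unreached */ }
};

int main() {
    Heap heap;
    std::queue<std::function<void()>> events;
    events.push([] { /* handler: allocates and builds object graphs */ });
    while (!events.empty()) {
        events.front()();   // run one handler to completion
        events.pop();
        heap.collect();     // safe point: no handler data is on the stack now
    }
    return 0;
}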
As the whole reference hierarchy is expressed explicitly in the DAG, there shouldn't be any pointers stored in the heap outside of the module's data section, which corresponds to the C global variables used as the 'root elements' of the object hierarchy; these can be traversed to find all the active references to heap data that the code can potentially use. A quick system could be made so that the compiler automatically generates a list of the 'root references' among the global variables, to avoid memory leaks if some global data happens to look like a reference.
As each thread has its own heap, it also avoids the 'stop the world' syndrome: all threads can garbage collect their own heap, and there is already a system of lockless synchronisation for accessing references via expressions on the DAG, to avoid having to rely only on 'bare pointers' to manipulate the object hierarchy, which allows dynamic relocation and makes it easier to track active references.
It's also very useful for tracking memory leaks: as the allocator can record the time of each allocation, it's easy to see all the allocations that happened between two points of the program, and to dump their whole hierarchy and properties from just the 'bare reference'.
Each thread contains two heaps: one that is manually managed, mostly used for temporary strings or I/O buffers, and another heap that can be managed either with atomic reference counting or with mark and sweep.
With this system, C programs rarely have to call malloc/free directly, or manipulate pointers to allocated memory directly, other than for temporary buffer allocation – like a dynamic stack, I/O buffers or temporary strings that can easily be managed manually. All the memory manipulation can be done via a runtime which internally keeps track of pointer address and size, data type and, optionally, a 'finalizer' function that will be called when the pointer is freed.
Since I started to use this system to make C programs, along with my own ABI which can dynamically link binaries compiled with Visual Studio and gcc together, I have tested it on many different use cases. I could make a mini multi-threaded window manager/UI with async IRQ-driven HID driver events, and a system of distributed applications based on blockchain data, including a multi-threaded HTTP server that can handle parallel JSON/RPC calls, with an abstraction of application stacks via custom data type definitions / scripts stored on the blockchain – and I have very few memory problems, even though it's 100% in C, multi-threaded, and deals with heavily dynamic data.
With the mark-and-sweep mode, it becomes quite easy to develop multi-threaded applications with a good level of concurrency, even a simple database system driven by a script over async HTTP/JSON/RPC, without having to care about complex memory management.
Even with the reference-count mode, the manipulation of references is explicit, and it should not be too hard to detect leaks with simple parsers. I already tested this with the ANTLR C parser, using a visitor class to walk the grammar and detect potential errors; since all memory referencing happens through specific types instead of bare pointers, it's not too hard to detect potential memory leak problems with a simple parser. Reply ↓
- Arron Grier on 2018-06-07 at 17:37:17 said: Since you've been talking a lot about Go lately, should you not mention it on your Document: How To Become A Hacker?
Just wondering Reply ↓
- esr on 2018-06-08 at 05:48:37 said: >Since you've been talking a lot about Go lately, should you not mention it on your Document: How To Become A Hacker?
Too soon. Go is very interesting but it's not an essential tool yet.
That might change in a few years. Reply ↓
- Yankes on 2018-12-18 at 19:20:46 said: I have one question: do you even need global AMM? Take one element of the graph – when will or should it be released in your reposurgeon? Overall I think the answer is never, because it is usually linked with others from the graph. And do you check how many objects are created and released during operations? I do not mean some temporary strings, but the objects representing the main working set.
Depending on the answer: if you load some graph element and it stays in memory indefinitely, then this could easily be converted to C/C++ by simply never calling `free` on graph elements (and all problems with memory management go out the window).
If they should be released early, then when should that happen? Do you have some code in reposurgeon that purges objects once they are not needed any more? Simple reachability of an object does not mean it is needed; many times it is quite the opposite. I am now working on a C# application that had a similar bungle, and the previous developers' "solution" was to restart the whole application instead of fixing the lifetime problems. The correct solution was C++-like code: I create an object, do the work, and purge it explicitly. With this, none of the components have memory issues now. Of course the problem there lay in not knowing the tools they used, not in the complexity of the domain – but did you analyze what is needed and what is not, and for how long? AMM does not solve this.
btw I am a big fan of the Lisp that lives in C++11, aka templates – a great pure functional language :D Reply ↓
- esr on 2018-12-18 at 20:57:12 said: >I have one question, do you even need global AMM?
Oh hell yes. Consider, for example, the demands of loading in and operating on multiple repositories. Reply ↓
- Yankes on 2018-12-19 at 08:56:36 said: If I understood this correctly, the situation looks like this:
I have a process that has loaded repos A, B and C and is actively working on each one.
Now, because of some demand, we need to load repo D.
After we are done, we go back to A, B and C.
Now the question is: should D's data be purged?
If there are memory connections from the previous repos, then it will stay in memory; if not, then AMM will remove all of its data from memory.
If this is a complex graph, then from any element you can crawl to any other element of the graph (this is a simplification but probably a safe assumption).
The first case (there is a connection) is equivalent to not using `free` in C. Of course, if not all of the graph is reachable, then there will be a partial purge of its memory (let's say that 10% will stay), but what happens when you need to load repo D again? The data still available is hidden deep in other graphs and most of the data has been lost to AMM; you need to load everything again, and now repo D's footprint is 110%. In case there is no connection between repos A, B, C and repo D, then we can free it entirely.
This is easily done in C++ (some kind of smart pointer that knows whether it points into the same repo or another one). Is my reasoning correct, or am I missing something?
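A minimal sketch of that per-repository ownership idea in C++ (my illustration, not reposurgeon code; RepoArena and Node are made-up names): each repo owns an arena, nodes hold non-owning links to each other, and dropping the arena frees the whole repo at once, with no per-node free and no global GC.

#include <memory>
#include <vector>

// Every node belongs to exactly one repo's arena; edges are non-owning.
struct Node {
    std::vector<Node*> edges;
};

class RepoArena {
    std::vector<std::unique_ptr<Node>> nodes;   // owns every node of one repo
public:
    Node* make_node() {
        nodes.push_back(std::make_unique<Node>());
        return nodes.back().get();
    }
    // Destroying the arena releases the whole repo in one sweep.
};

int main() {
    RepoArena repoD;
    Node* a = repoD.make_node();
    Node* b = repoD.make_node();
    a->edges.push_back(b);
    return 0;   // repoD goes out of scope here and frees a, b and every other node together
}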
btw there is a BIG difference between C and C++; I can implement things in C++ that I will NEVER be able to implement in C. An example of this is my strongly typed simple script language:
https://github.com/Yankes/OpenXcom/blob/master/src/Engine/Script.cpp
I would need to drop functionality/protections to be able to convert this to C (or even C++03). Another example of this is https://github.com/fmtlib/fmt from C++ versus `printf` from C.
Both do exactly the same thing, but the C++ one is many times better and safer. This means that if we add your statement on impossibility to mine, we have:
C <<< C++ <<< Go/Python
but for me personally it is more:
C <<< C++ < Go/Python
rather than yours:
C/C++ <<< Go/Python Reply ↓
- esr on 2018-12-19 at 09:08:46 said: >Do my reasoning is correct? or I miss something?
Not much. The bigger issue is that it is fucking insane to try anything like this in a language where the core abstractions are leaky. That disqualifies C++. Reply ↓
- Yankes on 2018-12-19 at 10:24:47 said: I only disagree with the word `insane`. C++ has a lot of problems, like UB, lots of corner cases, leaking abstractions, the whole crap inherited from C (and my favorite: 1000-line errors from templates), but it is not insane to work on memory problems with it.
You can easily create tools that make all these problems bearable, and this is the biggest flaw in C++: many problems are solvable, but not out of the box. C++ is good at creating abstractions:
https://www.youtube.com/watch?v=sPhpelUfu8Q
If the abstraction fits your domain, then it will not leak much, because it fits the underlying problem well.
And you can enforce lots of things that allow you to reason locally about the behavior of the program. If creating this new abstraction is indeed insane, then I think you have problems in Go too, because the only problem that AMM solves is reachability of memory and how long you need it.
btw the best thing that shows the difference between C++03 and C++11 is `std::vector<std::vector<T>>`: in C++03 this is insanely stupid and in C++11 it is insanely clever, because it has the performance characteristics of `std::vector` (thanks to `std::move`) and no problems with memory management (keep indexes stable and use `v.at(i).at(j).x = 5;`, or wrap it in a helper class and use `v[i][j].x` that will throw on a wrong index). Reply ↓
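A small illustration of that last point (my example, not Yankes' code): when the outer vector grows, the rows are moved rather than copied, and `.at()` turns a wrong index into an exception instead of memory corruption.

#include <iostream>
#include <stdexcept>
#include <vector>

struct Cell { int x = 0; };

int main() {
    std::vector<std::vector<Cell>> v(3, std::vector<Cell>(4));
    v.at(1).at(2).x = 5;                 // bounds-checked write
    v.push_back(std::vector<Cell>(4));   // reallocation moves the existing rows (C++11)
    try {
        v.at(10).at(0).x = 1;            // wrong index: caught instead of corrupting memory
    } catch (const std::out_of_range&) {
        std::cout << "bad index caught\n";
    }
    return v[1][2].x == 5 ? 0 : 1;
}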
Oct 15, 2019 | economistsview.typepad.com
From Reuters Odd News :
Man gets the poop on outsourcing , By Holly McKenna, May 2, Reuters
Computer programmer Steve Relles has the poop on what to do when your job is outsourced to India. Relles has spent the past year making his living scooping up dog droppings as the "Delmar Dog Butler." "My parents paid for me to get a (degree) in math and now I am a pooper scooper," "I can clean four to five yards in an hour if they are close together." Relles, who lost his computer programming job about three years ago ... has over 100 clients who pay $10 each for a once-a-week cleaning of their yard.
Relles competes for business with another local company called "Scoopy Do." Similar outfits have sprung up across America, including Petbutler.net, which operates in Ohio. Relles says his business is growing by word of mouth and that most of his clients are women who either don't have the time or desire to pick up the droppings. "St. Bernard (dogs) are my favorite customers since they poop in large piles which are easy to find," Relles said. "It sure beats computer programming because it's flexible, and I get to be outside,"
Oct 05, 2019 | www.reddit.com
They say, No more IT or system or server admins needed very soon...
Sick and tired of listening to these so called architects and full stack developers who watch bunch of videos on YouTube and Pluralsight, find articles online. They go around workplace throwing words like containers, devops, NoOps, azure, infrastructure as code, serverless, etc, they don't understand half of the stuff. I do some of the devops tasks in our company, I understand what it takes to implement and manage these technologies. Every meeting is infested with these A holes.
ntengineer 613 points · 4 days ago
Your best defense against these is to come up with non-sarcastic and quality questions to ask these people during the meeting, and watch them not have a clue how to answer them.
For example, a friend of mine worked at a smallish company, some manager really wanted to move more of their stuff into Azure including AD and Exchange environment. But they had common problems with their internet connection due to limited bandwidth and them not wanting to spend more. So during a meeting my friend asked a question something like this:
"You said on this slide that moving the AD environment and Exchange environment to Azure will save us money. Did you take into account that we will need to increase our internet speed by a factor of at least 4 in order to accommodate the increase in traffic going out to the Azure cloud? "
Of course, they hadn't. So the CEO asked my friend if he had the numbers. He had already done his homework, and it was a significant increase in cost every month; taking into account the cost of Azure plus the increase in bandwidth wiped away the manager's savings.
I know this won't work for everyone. Sometimes there are real savings in moving things to the cloud. But often there really aren't. Calling the uneducated people out on what they see as facts can be rewarding.
PuzzledSwitch 101 points · 4 days ago
My previous boss was that kind of a guy. He waited till other people were done throwing their weight around in a meeting and then calmly and politely dismantled them with facts.
No amount of corporate pressuring or bitching could ever stand up to that.
themastermatt 42 points · 4 days ago
I've been trying to do this. Problem is that everyone keeps talking all the way to the end of the meeting, leaving no room for rational facts.
PuzzledSwitch 35 points · 4 days ago
make a follow-up in email, then.
or, you might have to interject for a moment.
williamfny Jack of All Trades 26 points · 4 days ago
This is my approach. I don't yell or raise my voice, I just wait. Then I start asking questions that they generally cannot answer and slowly take them apart. I don't have to be loud to get my point across.
MaxHedrome 6 points · 4 days ago
CrazyTachikoma 4 days ago
Listen to this guy OP
This tactic is called "the box game". Just continuously ask them logical questions that can't be answered with their stupidity. (Box them in), let them be their own argument against themselves.
Most DevOps I've met are devs trying to bypass the sysadmins. This, and the Cloud fad, are burning serious amount of money from companies managed by stupid people that get easily impressed by PR stunts and shiny conferences. Then when everything goes to shit, they call the infrastructure team to fix it...
Sep 18, 2019 | www.moonofalabama.org
A.L. , Sep 18 2019 19:56 utc | 31
@30 David G
Perhaps, just like proponents of AI and self-driving cars, they just love the technology; they are so financially and emotionally invested in it that they can't see the forest for the trees.
I like technology; I studied engineering. But the myopic drive to profitability and naivety about unintended consequences are pushing this tech out into the world before it is ready.
Engineering used to be a discipline with ethics and responsibilities... But now anybody who can write two lines of code can call themselves a software engineer....
Sep 14, 2019 | www.nakedcapitalism.com
Wukchumni , September 13, 2019 at 4:29 pm
Re: Fake list of grunge slang:
a fabulous tale of the South Pacific by William Manchester
The Man Who Could Speak Japanese
"We wrote it down.
The next phrase was:
" ' Booki fai kiz soy ?' " said Whitey. "It means 'Do you surrender?' "
Then:
" ' Mizi pok loi ooni rak tong zin ?' 'Where are your comrades?' "
"Tong what ?" rasped the colonel.
"Tong zin , sir," our instructor replied, rolling chalk between his palms. He arched his eyebrows, as though inviting another question. There was one. The adjutant asked, "What's that gizmo on the end?"
Of course, it might have been a Japanese newspaper. Whitey's claim to be a linguist was the last of his status symbols, and he clung to it desperately. Looking back, I think his improvisations on the Morton fantail must have been one of the most heroic achievements in the history of confidence men -- which, as you may have gathered by now, was Whitey's true profession. Toward the end of our tour of duty on the 'Canal he was totally discredited with us and transferred at his own request to the 81-millimeter platoon, where our disregard for him was no stigma, since the 81 millimeter musclemen regarded us as a bunch of eight balls anyway. Yet even then, even after we had become completely disillusioned with him, he remained a figure of wonder among us. We could scarcely believe that an impostor could be clever enough actually to invent a language -- phonics, calligraphy, and all. It had looked like Japanese and sounded like Japanese, and during his seventeen days of lecturing on that ship Whitey had carried it all in his head, remembering every variation, every subtlety, every syntactic construction.
https://www.americanheritage.com/man-who-could-speak-japanese
Sep 07, 2019 | archive.computerhistory.org
Dijkstra said he was proud to be a programmer. Unfortunately he changed his attitude completely, and I think he wrote his last computer program in the 1980s. At this conference I went to in 1967 about simulation language, Chris Strachey was going around asking everybody at the conference what was the last computer program you wrote. This was 1967. Some of the people said, "I've never written a computer program." Others would say, "Oh yeah, here's what I did last week." I asked Edsger this question when I visited him in Texas in the 90s and he said, "Don, I write programs now with pencil and paper, and I execute them in my head." He finds that a good enough discipline.
I think he was mistaken on that. He taught me a lot of things, but I really think that if he had continued... One of Dijkstra's greatest strengths was that he felt a strong sense of aesthetics, and he didn't want to compromise his notions of beauty. They were so intense that when he visited me in the 1960s, I had just come to Stanford. I remember the conversation we had. It was in the first apartment, our little rented house, before we had electricity in the house.
We were sitting there in the dark, and he was telling me how he had just learned about the specifications of the IBM System/360, and it made him so ill that his heart was actually starting to flutter.
He intensely disliked things that he didn't consider clean to work with. So I can see that he would have distaste for the languages that he had to work with on real computers. My reaction to that was to design my own language, and then make Pascal so that it would work well for me in those days. But his response was to do everything only intellectually.
So, programming.
I happened to look the other day. I wrote 35 programs in January, and 28 or 29 programs in February. These are small programs, but I have a compulsion. I love to write programs and put things into it. I think of a question that I want to answer, or I have part of my book where I want to present something. But I can't just present it by reading about it in a book. As I code it, it all becomes clear in my head. It's just the discipline. The fact that I have to translate my knowledge of this method into something that the machine is going to understand just forces me to make that crystal-clear in my head. Then I can explain it to somebody else infinitely better. The exposition is always better if I've implemented it, even though it's going to take me more time.
Sep 07, 2019 | archive.computerhistory.org
So I had a programming hat when I was outside of Cal Tech, and at Cal Tech I am a mathematician taking my grad studies. A startup company, called Green Tree Corporation because green is the color of money, came to me and said, "Don, name your price. Write compilers for us and we will take care of finding computers for you to debug them on, and assistance for you to do your work. Name your price." I said, "Oh, okay. $100,000," assuming that this was... In that era this was not quite at Bill Gates's level today, but it was sort of out there.
The guy didn't blink. He said, "Okay." I didn't really blink either. I said, "Well, I'm not going to do it. I just thought this was an impossible number."
At that point I made the decision in my life that I wasn't going to optimize my income; I was really going to do what I thought I could do for well, I don't know. If you ask me what makes me most happy, number one would be somebody saying "I learned something from you". Number two would be somebody saying "I used your software". But number infinity would be Well, no. Number infinity minus one would be "I bought your book". It's not as good as "I read your book", you know. Then there is "I bought your software"; that was not in my own personal value. So that decision came up. I kept up with the literature about compilers. The Communications of the ACM was where the action was. I also worked with people on trying to debug the ALGOL language, which had problems with it. I published a few papers, like "The Remaining Trouble Spots in ALGOL 60" was one of the papers that I worked on. I chaired a committee called "Smallgol" which was to find a subset of ALGOL that would work on small computers. I was active in programming languages.
Sep 07, 2019 | conservancy.umn.edu
Frana: You have made the comment several times that maybe 1 in 50 people have the "computer scientist's mind."
Knuth: Yes.
Frana: I am wondering if a large number of those people are trained professional librarians? [laughter] There is some strangeness there. But can you pinpoint what it is about the mind of the computer scientist that is....
Knuth: That is different?
Frana: What are the characteristics?
Knuth: Two things: one is the ability to deal with non-uniform structure, where you have case one, case two, case three, case four. Or that you have a model of something where the first component is integer, the next component is a Boolean, and the next component is a real number, or something like that, you know, non-uniform structure. To deal fluently with those kinds of entities, which is not typical in other branches of mathematics, is critical. And the other characteristic ability is to shift levels quickly, from looking at something in the large to looking at something in the small, and many levels in between, jumping from one level of abstraction to another. You know that, when you are adding one to some number, that you are actually getting closer to some overarching goal. These skills, being able to deal with nonuniform objects and to see through things from the top level to the bottom level, these are very essential to computer programming, it seems to me. But maybe I am fooling myself because I am too close to it.
Frana: It is the hardest thing to really understand that which you are existing within.
Knuth: Yes.
Sep 07, 2019 | conservancy.umn.edu
Knuth: I can be a writer, who tries to organize other people's ideas into some kind of a more coherent structure so that it is easier to put things together. I can see that I could be viewed as a scholar that does his best to check out sources of material, so that people get credit where it is due. And to check facts over, not just to look at the abstract of something, but to see what the methods were that did it and to fill in holes if necessary. I look at my role as being able to understand the motivations and terminology of one group of specialists and boil it down to a certain extent so that people in other parts of the field can use it. I try to listen to the theoreticians and select what they have done that is important to the programmer on the street; to remove technical jargon when possible.
But I have never been good at any kind of a role that would be making policy, or advising people on strategies, or what to do. I have always been best at refining things that are there and bringing order out of chaos. I sometimes raise new ideas that might stimulate people, but not really in a way that would be in any way controlling the flow. The only time I have ever advocated something strongly was with literate programming; but I do this always with the caveat that it works for me, not knowing if it would work for anybody else.
When I work with a system that I have created myself, I can always change it if I don't like it. But everybody who works with my system has to work with what I give them. So I am not able to judge my own stuff impartially. So anyway, I have always felt bad about if anyone says, 'Don, please forecast the future,'...
Sep 06, 2019 | archive.computerhistory.org
...I showed the second version of this design to two of my graduate students, and I said, "Okay, implement this, please, this summer. That's your summer job." I thought I had specified a language. I had to go away. I spent several weeks in China during the summer of 1977, and I had various other obligations. I assumed that when I got back from my summer trips, I would be able to play around with TeX and refine it a little bit. To my amazement, the students, who were outstanding students, had not completed [it]. They had a system that was able to do about three lines of TeX. I thought, "My goodness, what's going on? I thought these were good students." Well afterwards I changed my attitude to saying, "Boy, they accomplished a miracle."
Because going from my specification, which I thought was complete, they really had an impossible task, and they had succeeded wonderfully with it. These students, by the way, [were] Michael Plass, who has gone on to be the brains behind almost all of Xerox's Docutech software and all kind of things that are inside of typesetting devices now, and Frank Liang, one of the key people for Microsoft Word.
He did important mathematical things as well as his hyphenation methods which are quite used in all languages now. These guys were actually doing great work, but I was amazed that they couldn't do what I thought was just sort of a routine task. Then I became a programmer in earnest, where I had to do it. The reason is when you're doing programming, you have to explain something to a computer, which is dumb.
When you're writing a document for a human being to understand, the human being will look at it and nod his head and say, "Yeah, this makes sense." But then there's all kinds of ambiguities and vagueness that you don't realize until you try to put it into a computer. Then all of a sudden, almost every five minutes as you're writing the code, a question comes up that wasn't addressed in the specification. "What if this combination occurs?"
It just didn't occur to the person writing the design specification. When you're faced with implementation, a person who has been delegated this job of working from a design would have to say, "Well hmm, I don't know what the designer meant by this."
If I hadn't been in China they would've scheduled an appointment with me and stopped their programming for a day. Then they would come in at the designated hour and we would talk. They would take 15 minutes to present to me what the problem was, and then I would think about it for a while, and then I'd say, "Oh yeah, do this. " Then they would go home and they would write code for another five minutes and they'd have to schedule another appointment.
I'm probably exaggerating, but this is why I think Bob Floyd's Chiron compiler never got going. Bob worked many years on a beautiful idea for a programming language, where he designed a language called Chiron, but he never touched the programming himself. I think this was actually the reason that he had trouble with that project, because it's so hard to do the design unless you're faced with the low-level aspects of it, explaining it to a machine instead of to another person.
Forsythe, I think it was, who said, "People have said traditionally that you don't understand something until you've taught it in a class. The truth is you don't really understand something until you've taught it to a computer, until you've been able to program it." At this level, programming was absolutely important
Sep 06, 2019 | conservancy.umn.edu
Knuth: No, I stopped going to conferences. It was too discouraging. Computer programming keeps getting harder because more stuff is discovered. I can cope with learning about one new technique per day, but I can't take ten in a day all at once. So conferences are depressing; it means I have so much more work to do. If I hide myself from the truth I am much happier.
Sep 06, 2019 | archive.computerhistory.org
Knuth: This is, of course, really the story of my life, because I hope to live long enough to finish it. But I may not, because it's turned out to be such a huge project. I got married in the summer of 1961, after my first year of graduate school. My wife finished college, and I could use the money I had made -- the $5000 on the compiler -- to finance a trip to Europe for our honeymoon.
We had four months of wedded bliss in Southern California, and then a man from Addison-Wesley came to visit me and said "Don, we would like you to write a book about how to write compilers."
The more I thought about it, I decided "Oh yes, I've got this book inside of me."
I sketched out that day -- I still have the sheet of tablet paper on which I wrote -- I sketched out 12 chapters that I thought ought to be in such a book. I told Jill, my wife, "I think I'm going to write a book."
As I say, we had four months of bliss, because the rest of our marriage has all been devoted to this book. Well, we still have had happiness. But really, I wake up every morning and I still haven't finished the book. So I try to -- I have to -- organize the rest of my life around this, as one main unifying theme. The book was supposed to be about how to write a compiler. They had heard about me from one of their editorial advisors, that I knew something about how to do this. The idea appealed to me for two main reasons. One is that I did enjoy writing. In high school I had been editor of the weekly paper. In college I was editor of the science magazine, and I worked on the campus paper as copy editor. And, as I told you, I wrote the manual for that compiler that we wrote. I enjoyed writing, number one.
Also, Addison-Wesley was the people who were asking me to do this book; my favorite textbooks had been published by Addison Wesley. They had done the books that I loved the most as a student. For them to come to me and say, "Would you write a book for us?", and here I am just a secondyear gradate student -- this was a thrill.
Another very important reason at the time was that I knew that there was a great need for a book about compilers, because there were a lot of people who even in 1962 -- this was January of 1962 -- were starting to rediscover the wheel. The knowledge was out there, but it hadn't been explained. The people who had discovered it, though, were scattered all over the world and they didn't know of each other's work either, very much. I had been following it. Everybody I could think of who could write a book about compilers, as far as I could see, they would only give a piece of the fabric. They would slant it to their own view of it. There might be four people who could write about it, but they would write four different books. I could present all four of their viewpoints in what I would think was a balanced way, without any axe to grind, without slanting it towards something that I thought would be misleading to the compiler writer for the future. I considered myself as a journalist, essentially. I could be the expositor, the tech writer, that could do the job that was needed in order to take the work of these brilliant people and make it accessible to the world. That was my motivation. Now, I didn't have much time to spend on it then, I just had this page of paper with 12 chapter headings on it. That's all I could do while I'm a consultant at Burroughs and doing my graduate work. I signed a contract, but they said "We know it'll take you a while." I didn't really begin to have much time to work on it until 1963, my third year of graduate school, as I'm already finishing up on my thesis. In the summer of '62, I guess I should mention, I wrote another compiler. This was for Univac; it was a FORTRAN compiler. I spent the summer, I sold my soul to the devil, I guess you say, for three months in the summer of 1962 to write a FORTRAN compiler. I believe that the salary for that was $15,000, which was much more than an assistant professor. I think assistant professors were getting eight or nine thousand in those days.
Feigenbaum: Well, when I started in 1960 at [University of California] Berkeley, I was getting $7,600 for the nine-month year.
Knuth: Yeah, so you see it. I got $15,000 for a summer job in 1962 writing a FORTRAN compiler. One day during that summer I was writing the part of the compiler that looks up identifiers in a hash table. The method that we used is called linear probing. Basically you take the variable name that you want to look up, you scramble it, like you square it or something like this, and that gives you a number between one and, well in those days it would have been between 1 and 1000, and then you look there. If you find it, good; if you don't find it, go to the next place and keep on going until you either get to an empty place, or you find the number you're looking for. It's called linear probing.
There was a rumor that one of Professor Feller's students at Princeton had tried to figure out how fast linear probing works and was unable to succeed. This was a new thing for me. It was a case where I was doing programming, but I also had a mathematical problem that would go into my other [job]. My winter job was being a math student, my summer job was writing compilers. There was no mix. These worlds did not intersect at all in my life at that point.
So I spent one day during the summer while writing the compiler looking at the mathematics of how fast does linear probing work. I got lucky, and I solved the problem. I figured out some math, and I kept two or three sheets of paper with me and I typed it up. ["Notes on 'Open' Addressing', 7/22/63] I guess that's on the internet now, because this became really the genesis of my main research work, which developed not to be working on compilers, but to be working on what they call analysis of algorithms, which is, have a computer method and find out how good is it quantitatively. I can say, if I got so many things to look up in the table, how long is linear probing going to take. It dawned on me that this was just one of many algorithms that would be important, and each one would lead to a fascinating mathematical problem. This was easily a good lifetime source of rich problems to work on.
Here I am then, in the middle of 1962, writing this FORTRAN compiler, and I had one day to do the research and mathematics that changed my life for my future research trends. But now I've gotten off the topic of what your original question was.
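A minimal sketch of the linear probing Knuth describes, in modern C++ rather than his 1962 code (the table size and the hash are placeholders, and the sketch assumes the table never fills up): scramble the name to pick a slot, then walk forward until you find the name or an empty place.

#include <array>
#include <cstddef>
#include <optional>
#include <string>

class SymbolTable {
    static constexpr std::size_t N = 1000;           // table size, as in the anecdote
    std::array<std::optional<std::string>, N> slot;  // empty optional = free slot
public:
    // Returns the index where `name` lives, inserting it if absent.
    std::size_t lookup_or_insert(const std::string& name) {
        std::size_t i = std::hash<std::string>{}(name) % N;  // "scramble it"
        while (slot[i] && *slot[i] != name)                  // occupied by another name
            i = (i + 1) % N;                                 // go to the next place
        if (!slot[i]) slot[i] = name;                        // empty place: claim it
        return i;
    }
};

int main() {
    SymbolTable t;
    return t.lookup_or_insert("alpha") == t.lookup_or_insert("alpha") ? 0 : 1;
}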
Feigenbaum: We were talking about sort of the.. You talked about the embryo of The Art of Computing. The compiler book morphed into The Art of Computer Programming, which became a seven-volume plan.
Knuth: Exactly. Anyway, I'm working on a compiler and I'm thinking about this. But now I'm starting, after I finish this summer job, then I began to do things that were going to be relating to the book. One of the things I knew I had to have in the book was an artificial machine, because I'm writing a compiler book but machines are changing faster than I can write books. I have to have a machine that I'm totally in control of. I invented this machine called MIX, which was typical of the computers of 1962.
In 1963 I wrote a simulator for MIX so that I could write sample programs for it, and I taught a class at Caltech on how to write programs in assembly language for this hypothetical computer. Then I started writing the parts that dealt with sorting problems and searching problems, like the linear probing idea. I began to write those parts, which are part of a compiler, of the book. I had several hundred pages of notes gathering for those chapters for The Art of Computer Programming. Before I graduated, I've already done quite a bit of writing on The Art of Computer Programming.
I met George Forsythe about this time. George was the man who inspired both of us [Knuth and Feigenbaum] to come to Stanford during the '60s. George came down to Southern California for a talk, and he said, "Come up to Stanford. How about joining our faculty?" I said "Oh no, I can't do that. I just got married, and I've got to finish this book first." I said, "I think I'll finish the book next year, and then I can come up [and] start thinking about the rest of my life, but I want to get my book done before my son is born." Well, John is now 40-some years old and I'm not done with the book. Part of my lack of expertise is any good estimation procedure as to how long projects are going to take. I way underestimated how much needed to be written about in this book. Anyway, I started writing the manuscript, and I went merrily along writing pages of things that I thought really needed to be said. Of course, it didn't take long before I had started to discover a few things of my own that weren't in any of the existing literature. I did have an axe to grind. The message that I was presenting was in fact not going to be unbiased at all. It was going to be based on my own particular slant on stuff, and that original reason for why I should write the book became impossible to sustain. But the fact that I had worked on linear probing and solved the problem gave me a new unifying theme for the book. I was going to base it around this idea of analyzing algorithms, and have some quantitative ideas about how good methods were. Not just that they worked, but that they worked well: this method worked 3 times better than this method, or 3.1 times better than this method. Also, at this time I was learning mathematical techniques that I had never been taught in school. I found they were out there, but they just hadn't been emphasized openly, about how to solve problems of this kind.
So my book would also present a different kind of mathematics than was common in the curriculum at the time, that was very relevant to analysis of algorithm. I went to the publishers, I went to Addison Wesley, and said "How about changing the title of the book from 'The Art of Computer Programming' to 'The Analysis of Algorithms'." They said that will never sell; their focus group couldn't buy that one. I'm glad they stuck to the original title, although I'm also glad to see that several books have now come out called "The Analysis of Algorithms", 20 years down the line.
But in those days, The Art of Computer Programming was very important because I'm thinking of the aesthetical: the whole question of writing programs as something that has artistic aspects in all senses of the word. The one idea is "art" which means artificial, and the other "art" means fine art. All these are long stories, but I've got to cover it fairly quickly.
I've got The Art of Computer Programming started out, and I'm working on my 12 chapters. I finish a rough draft of all 12 chapters by, I think it was like 1965. I've got 3,000 pages of notes, including a very good example of what you mentioned about seeing holes in the fabric. One of the most important chapters in the book is parsing: going from somebody's algebraic formula and figuring out the structure of the formula. Just the way I had done in seventh grade finding the structure of English sentences, I had to do this with mathematical sentences.
Chapter ten is all about parsing of context-free language, [which] is what we called it at the time. I covered what people had published about context-free languages and parsing. I got to the end of the chapter and I said, well, you can combine these ideas and these ideas, and all of a sudden you get a unifying thing which goes all the way to the limit. These other ideas had sort of gone partway there. They would say "Oh, if a grammar satisfies this condition, I can do it efficiently." "If a grammar satisfies this condition, I can do it efficiently." But now, all of a sudden, I saw there was a way to say I can find the most general condition that can be done efficiently without looking ahead to the end of the sentence. That you could make a decision on the fly, reading from left to right, about the structure of the thing. That was just a natural outgrowth of seeing the different pieces of the fabric that other people had put together, and writing it into a chapter for the first time. But I felt that this general concept, well, I didn't feel that I had surrounded the concept. I knew that I had it, and I could prove it, and I could check it, but I couldn't really intuit it all in my head. I knew it was right, but it was too hard for me, really, to explain it well.
So I didn't put it in The Art of Computer Programming. I thought it was beyond the scope of my book. Textbooks don't have to cover everything when you get to the harder things; then you have to go to the literature. My idea at that time [is] I'm writing this book and I'm thinking it's going to be published very soon, so any little things I discover and put in the book I didn't bother to write a paper and publish in the journal because I figure it'll be in my book pretty soon anyway. Computer science is changing so fast, my book is bound to be obsolete.
It takes a year for it to go through editing, and people drawing the illustrations, and then they have to print it and bind it and so on. I have to be a little bit ahead of the state-of-the-art if my book isn't going to be obsolete when it comes out. So I kept most of the stuff to myself that I had, these little ideas I had been coming up with. But when I got to this idea of left-to-right parsing, I said "Well here's something I don't really understand very well. I'll publish this, let other people figure out what it is, and then they can tell me what I should have said." I published that paper I believe in 1965, at the end of finishing my draft of the chapter, which didn't get as far as that story, LR(k). Well now, textbooks of computer science start with LR(k) and take off from there. But I want to give you an idea of
Aug 31, 2019 | developers.slashdot.org
Anonymous Coward, Friday February 22, 2019 @02:42PM (#58165060)
Seven Spirals (4924941), Friday February 22, 2019 @02:57PM (#58165150)
Algorithms, not code (Score: 4, Insightful)
Sad to see these are all books about coding and coding style. Nothing at all here about algorithms, or data structures.
My vote goes for Algorithms by Sedgewick
MOTIF Programming by Marshall Brain ( Score: 3 )Seven Spirals ( 4924941 ) writes:Amazing how little memory and CPU MOTIF applications take. Once you get over the callbacks, it's actually not bad!
Re: ( Score: 2 )SuperKendall ( 25149 ) writes:Interesting. Sorry you had that experience. I'm not sure what you mean by a "multi-line text widget". I can tell you that early versions of OpenMOTIF were very very buggy in my experience. You probably know this, but after OpenMOTIF was completed and revved a few times the original MOTIF code was released as open-source. Many of the bugs I'd been seeing (and some just strange visual artifacts) disappeared. I know a lot of people love QT and it's produced real apps and real results - I won't poo-poo it. How
Design and Evolution of C++ ( Score: 2 )shanen ( 462549 ) writes:Even if you don't like C++ much, The Design and Evolution of C++ [amazon.com] is a great book for understanding why pretty much any language ends up the way it does, seeing the tradeoffs and how a language comes to grow and expand from simple roots. It's way more interesting to read than you might expect (not very dry, and more about human interaction than you would expect).
Other than that, reading through back posts in a lot of coding blogs that have been around a long time is probably a really good idea.
Also a side re
What about books that hadn't been written yet? ( Score: 2 )
shanen ( 462549 ) writes:
You young whippersnappers don't 'preciate how good you have it!
Back in my day, the only book about programming was the 1401 assembly language manual!
But seriously, folks, it's pretty clear we still don't know shite about how to program properly. We have some fairly clear success criteria for improving the hardware, but the criteria for good software are clear as mud, and the criteria for ways to produce good software are much muddier than that.
Having said that, I will now peruse the thread rather carefully
TMI, especially PII ( Score: 2 )
UnknownSoldier ( 67820 ) , Friday February 22, 2019 @03:52PM ( #58165532 )
Couldn't find any mention of Guy Steele, so I'll throw in The New Hacker's Dictionary , which I once owned in dead tree form. Not sure if Version 4.4.7 http://catb.org/jargon/html/ [catb.org] is the latest online... Also remember a couple of his language manuals. Probably used the Common Lisp one the most...
Didn't find any mention of a lot of books that I consider highly relevant, but that may reflect my personal bias towards history. Not really relevant for most programmers.
TMI, but if I open up my database on all t
Programming is about **Effective Communication** ( Score: 5 , Insightful)
I've been programming for the past ~40 years and I'll try to summarize what I believe are the most important bits about programming (pardon the pun). Think of this as a META: " HOWTO: Be A Great Programmer " summary. (I'll get to the books section in a bit.)
1. All code can be summarized as a trinity of 3 fundamental concepts:
* Linear ; that is, sequence: A, B, C
* Cyclic ; that is, unconditional jumps: A-B-C-goto B
* Choice ; that is, conditional jumps: if A then B (a minimal sketch of these three primitives follows item 2 below)

2. ~80% of programming is NOT about code; it is about Effective Communication. Whether that be:
* with your compiler / interpreter / REPL
* with other code (levels of abstraction, level of coupling, separation of concerns, etc.)
* with your boss(es) / manager(s)
* with your colleagues
* with your legal team
* with your QA dept
* with your customer(s)
* with the general public

The other ~20% is effective time management and design. A good programmer knows how to budget their time. Programming is about balancing the three conflicting goals of the Project Management Triangle [wikipedia.org]: you can have it on time, on budget, or on quality -- pick two.
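Coming back to item 1, here is a minimal sketch of those three primitives (Python is used purely for illustration; the function and values are made up):

# The three control-flow primitives from item 1, in one tiny function.
def sum_even(numbers):
    total = 0                  # Linear: statements execute in sequence
    for n in numbers:          # Cyclic: the loop jumps back until the list is exhausted
        if n % 2 == 0:         # Choice: a conditional jump
            total += n
    return total

print(sum_even([1, 2, 3, 4]))  # prints 6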
3. Stages of a Programmer
There are two old jokes:
In Lisp all code is data. In Haskell all data is code.

And:
Progression of a (Lisp) Programmer:

* The newbie realizes that the difference between code and data is trivial.
* The expert realizes that all code is data.
* The true master realizes that all data is code.

(Attributed to Aristotle Pagaltzis)
The point of these jokes is that as you work with systems you start to realize that a data-driven process can often greatly simplify things.
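As a small, hypothetical illustration of that point (Python again; the command names are made up): replacing a chain of conditionals with a table of handlers turns "code" into "data" that is trivial to extend.

# Code-driven: every new command means editing the branching logic.
def handle_v1(cmd, arg):
    if cmd == "start":
        return "starting " + arg
    elif cmd == "stop":
        return "stopping " + arg
    elif cmd == "status":
        return "status of " + arg

# Data-driven: behavior lives in a table; adding a command is adding a row.
HANDLERS = {
    "start":  lambda arg: "starting " + arg,
    "stop":   lambda arg: "stopping " + arg,
    "status": lambda arg: "status of " + arg,
}

def handle_v2(cmd, arg):
    return HANDLERS[cmd](arg)

print(handle_v2("start", "webserver"))  # starting webserver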
4. Know Thy Data
Fred Brooks once wrote
"Show me your flowcharts (source code), and conceal your tables (domain model), and I shall continue to be mystified; show me your tables (domain model) and I won't usually need your flowcharts (source code): they'll be obvious."A more modern version would read like this:
Show me your code and I'll have to see your data,
Show me your data and I won't have to see your code.

The importance of data can't be overstated:
* Optimization STARTS with understanding HOW the data is being generated and used, NOT the code as has been traditionally taught.
* Post 2000 "Big Data" has been called the new oil. We are generating upwards to millions of GB of data every second. Analyzing that data is import to spot trends and potential problems.5. There are three levels of optimizations. From slowest to fastest run-time:
a) Bit-twiddling hacks [stanford.edu]
b) Algorithmic -- Algorithmic complexity or Analysis of algorithms [wikipedia.org] (such as Big-O notation)
c) Data-Orientated Design [dataorienteddesign.com] -- Understanding how hardware caches, such as instruction and data caches, matter. Optimize for the common case, NOT the single case that OOP tends to favor.

Optimizing is understanding Bang-for-the-Buck. 80% of execution time is spent in 20% of the code. Speeding up hot-spots with bit twiddling won't be as effective as using a more efficient algorithm which, in turn, won't be as efficient as understanding HOW the data is manipulated in the first place.
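A quick sketch of why the algorithmic level usually dominates bit-twiddling (Python, toy sizes chosen for illustration; measure your own workload before drawing conclusions):

import timeit

haystack = list(range(100_000))
needles = list(range(0, 100_000, 7))

def linear_scan():
    # O(n) membership test per lookup -- the kind of hot loop people bit-twiddle.
    return sum(1 for n in needles if n in haystack)

haystack_set = set(haystack)

def hashed_lookup():
    # Same answer, but O(1) average membership test: an algorithmic change.
    return sum(1 for n in needles if n in haystack_set)

print(timeit.timeit(linear_scan, number=1))    # seconds
print(timeit.timeit(hashed_lookup, number=1))  # orders of magnitude faster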
6. Fundamental Reading
Since the OP specifically asked about books -- there are lots of great ones. The ones that have impressed me that I would mark as "required" reading:
* The Mythical Man-Month
* Godel, Escher, Bach
* Knuth: The Art of Computer Programming
* The Pragmatic Programmer
* Zero Bugs and Program Faster
* Writing Solid Code by Steve Maguire / Code Complete by Steve McConnell
* Game Programming Patterns [gameprogra...tterns.com] (*)
* Game Engine Design
* Thinking in Java by Bruce Eckel
* Puzzles for Hackers by Ivan Sklyarov

(*) I did NOT list Design Patterns: Elements of Reusable Object-Oriented Software as that leads to typical, bloated, over-engineered crap. The main problem with "Design Patterns" is that a programmer will often get locked into a mindset of seeing everything as a pattern -- even when a simple few lines of code would solve the problem. For example, here are 1,100+ lines of Crap++ code such as Boost's over-engineered CRC code [boost.org] when a mere ~25 lines of SIMPLE C code would have done the trick. When was the last time you ACTUALLY needed to _modify_ a CRC function? The BIG picture is that you are probably looking for a BETTER HASHING function with fewer collisions. You probably would be better off using a DIFFERENT algorithm such as SHA-2, etc.
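To illustrate the "~25 lines" claim, here is a hedged sketch of a plain table-driven CRC-32 (the poster had C in mind; Python is used here only to keep the examples in one language, and the result can be checked against zlib.crc32):

def make_crc32_table(poly=0xEDB88320):
    # Precompute the lookup table for the reflected CRC-32 polynomial.
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

_CRC32_TABLE = make_crc32_table()

def crc32(data, crc=0xFFFFFFFF):
    # Standard CRC-32 (same parameters as zlib.crc32).
    for b in data:
        crc = (crc >> 8) ^ _CRC32_TABLE[(crc ^ b) & 0xFF]
    return crc ^ 0xFFFFFFFF

# import zlib; assert crc32(b"123456789") == zlib.crc32(b"123456789")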
7. Do NOT copy-pasta
Roughly 80% of bugs creep in because someone blindly copied-pasted without thinking. Type out ALL code so you actually THINK about what you are writing.
8. K.I.S.S.
Over-engineering, aka technical debt, will be your Achilles' heel. Keep It Simple, Silly.
9. Use DESCRIPTIVE variable names
You spend ~80% of your time READING code, and only ~20% writing it. Use good, descriptive variable names. Far too many programmers write useless comments and don't understand the difference between code and comments:
Code says HOW, Comments say WHY

A crap comment will say something like:
// increment i

No shit, Sherlock! Don't comment the obvious!
A good comment will say something like:
// BUGFIX: 1234: Work-around issues caused by A, B, and C.

10. Ignoring Memory Management doesn't make it go away -- now you have two problems. (With apologies to JWZ)
TINSTAAFL.
11. Learn Multi-Paradigm programming [wikipedia.org].
If you don't understand both the pros and cons of these programming paradigms
... * Procedural
* Object-Orientated
* Functional, and
* Data-Orientated Design
... then you will never really understand programming, nor abstraction, at a deep level, along with how and when it should and shouldn't be used.

12. Multi-disciplinary POV
ALL non-trivial code has bugs. If you aren't using static code analysis [wikipedia.org] then you are not catching as many bugs as the people who are.
Also, a good programmer looks at his code from many different angles. As a programmer you must put on many different hats to find them:
* Architect -- design the code
* Engineer / Construction Worker -- implement the code
* Tester -- test the code
* Consumer -- doesn't see the code, only sees the results. Does it even work?? Did you VERIFY it did BEFORE you checked your code into version control?

13. Learn multiple Programming Languages
Each language was designed to solve certain problems. Learning different languages, even ones you hate, will expose you to different concepts. e.g. If you don't know how to read assembly language AND your high-level language then you will never be as good as the programmer who does both.
14. Respect your Colleagues' and Consumers' Time, Space, and Money.

Mobile games are the WORST at respecting people's time, space and money, turning "players into payers." They treat customers as whales. Don't do this. A practical example: If you are in a Slack channel with 50+ people, do NOT use @here. YOUR fire is not their emergency!
15. Be Passionate
If you aren't passionate about programming, that is, you are only doing it for the money, it will show. Take some pride in doing a GOOD job.
16. Perfect Practice Makes Perfect.
If you aren't programming every day you will never be as good as someone who is. Programming is about solving interesting problems. Practice solving puzzles to develop your intuition and lateral thinking. The more you practice the better you get.
"Sorry" for the book but I felt it was important to summarize the "essentials" of programming.
--
Hey Slashdot. Fix your shitty filter so long lists can be posted: "Your comment has too few characters per line (currently 37.0)."

raymorris ( 2726007 ) , Friday February 22, 2019 @05:39PM ( #58166230 ) Journal
Shared this with my team ( Score: 4 , Insightful)
You crammed a lot of good ideas into a short post.
I'm sending my team at work a link to your post.

You mentioned that code can be data. Linus Torvalds had this to say:
"I'm a huge proponent of designing your code around the data, rather than the other way around, and I think it's one of the reasons git has been fairly successful [â¦] I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important."
"Bad programmers worry about the code. Good programmers worry about data structures and their relationships."
I'm inclined to agree. Once the data structure is right, the code often almost writes itself. It'll be easy to write and easy to read because it's obvious how one would handle data structured in that elegant way.
Writing the code necessary to transform the data from the input format into the right structure can be non-obvious, but it's normally worth it.
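A small, made-up sketch of what "the code almost writes itself" can look like once the data structure fits the problem (Python, hypothetical permissions example):

# Awkward structure: a flat list of (user, permission) pairs forces scanning.
pairs = [("alice", "read"), ("bob", "write"), ("alice", "write")]

def can_write_v1(user):
    for u, p in pairs:
        if u == user and p == "write":
            return True
    return False

# Better structure: index by user once; the query becomes a single obvious line.
perms = {}
for u, p in pairs:
    perms.setdefault(u, set()).add(p)

def can_write_v2(user):
    return "write" in perms.get(user, set())

print(can_write_v1("alice"), can_write_v2("alice"))  # True True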
Aug 31, 2019 | ask.slashdot.org
GreatDrok ( 684119 ) , Saturday June 04, 2016 @10:03PM ( #52250917 ) Journal
Programming, not coding ( Score: 5 , Interesting)
I learnt to program at school from a Ph.D. computer scientist. We never even had computers in the class. We learnt to break the problem down into sections using flowcharts or pseudo-code, and then we would translate that program into whatever coding language we were using. I still do this, usually in my notebook, where I figure out all the things I need to do, then write the skeleton of the code using a series of comments for what each section of my program does, and then fill in the code for each section. It is a combination of top-down and bottom-up programming, writing routines that can be independently tested and validated.
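A sketch of that "skeleton of comments first" habit (Python; the report-importing scenario and helper names are invented for illustration):

# Skeleton first: each comment marks a section, then each section becomes a
# small routine that can be tested and validated on its own.
def read_records(path):
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def validate(records):
    good = [r for r in records if r.isdigit()]
    bad = [r for r in records if not r.isdigit()]
    return good, bad

def import_report(path):
    # 1. Read the raw file and split it into records.
    records = read_records(path)
    # 2. Validate each record; keep the bad ones for a summary.
    good, bad = validate(records)
    # 3. "Load" the good records (here we just count them).
    loaded = len(good)
    # 4. Report what was skipped so it can be fixed upstream.
    return loaded, "skipped %d bad record(s)" % len(bad)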
Nov 05, 2018 | www.amazon.com
Elegance is one of those things that can be difficult to define. I know it when I see it, but putting what I see into a terse definition is a challenge. Using the Linux dict command, WordNet provides one definition of elegance as, "a quality of neatness and ingenious simplicity in the solution of a problem (especially in science or mathematics); 'the simplicity and elegance of his invention.'"

In the context of this book, I think that elegance is a state of beauty and simplicity in the design and working of both hardware and software. When a design is elegant,
software and hardware work better and are more efficient. The user is aided by simple, efficient, and understandable tools.

Creating elegance in a technological environment is hard. It is also necessary. Elegant solutions produce elegant results and are easy to maintain and fix. Elegance does not happen by accident; you must work for it.
The quality of simplicity is a large part of technical elegance. So large, in fact that it deserves a chapter of its own, Chapter 18, "Find the Simplicity," but we do not ignore it here. This chapter discusses what it means for hardware and software to be elegant.
Yes, hardware can be elegant -- even beautiful, pleasing to the eye. Hardware that is well designed is more reliable as well. Elegant hardware solutions improve reliability.
Sep 21, 2018 | tech.slashdot.org
Nikita Prokopov, a software programmer and author of Fira Code, a popular programming font, AnyBar, a universal status indicator, and some open-source Clojure libraries, writes :
Remember times when an OS, apps and all your data fit on a floppy? Your desktop todo app is probably written in Electron and thus has a userland driver for the Xbox 360 controller in it, can render 3D graphics, play audio and take photos with your web camera. A simple text chat is notorious for its load speed and memory consumption. Yes, you really have to count Slack in as a resource-heavy application. I mean, a chatroom and a barebones text editor are supposed to be two of the least demanding apps in the whole world. Welcome to 2018.
At least it works, you might say. Well, bigger doesn't imply better. Bigger means someone has lost control. Bigger means we don't know what's going on. Bigger means complexity tax, performance tax, reliability tax. This is not the norm and should not become the norm . Overweight apps should be a red flag. They should mean run away scared. A 16GB Android phone was perfectly fine 3 years ago. Today with Android 8.1 it's barely usable because each app has become at least twice as big for no apparent reason. There are no additional functions. They are not faster or more optimized. They don't look different. They just...grow?
iPhone 4S was released with iOS 5, but can barely run iOS 9. And it's not because iOS 9 is that much superior -- it's basically the same. But their new hardware is faster, so they made software slower. Don't worry -- you got exciting new capabilities like...running the same apps at the same speed! I dunno. [...] Nobody understands anything at this point. Nor do they want to. We just throw barely baked shit out there, hope for the best and call it "startup wisdom." Web pages ask you to refresh if anything goes wrong. Who has time to figure out what happened? Any web app produces a constant stream of "random" JS errors in the wild, even on compatible browsers.
[...] It just seems that nobody is interested in building quality, fast, efficient, lasting, foundational stuff anymore. Even when efficient solutions have been known for ages, we still struggle with the same problems: package management, build systems, compilers, language design, IDEs. Build systems are inherently unreliable and periodically require a full clean, even though all the info needed for invalidation is there. Nothing stops us from making the build process reliable, predictable and 100% reproducible. Just nobody thinks it's important. NPM has stayed in a "sometimes works" state for years.
K. S. Kyosuke ( 729550 ) , Friday September 21, 2018 @11:32AM ( #57354556 )
K. S. Kyosuke ( 729550 ) writes: on Friday September 21, 2018 @11:58AM ( #57354754 )
Re:Why should they? ( Score: 4 , Insightful)
Less resource use to accomplish the required tasks? Both in manufacturing (more chips from the same amount of manufacturing input) and in operation (less power used)?
Re:Why should they? ( Score: 2 )
DontBeAMoran ( 4843879 ) writes: on Friday September 21, 2018 @12:04PM ( #57354826 )
Ehm...so for example using smaller cars with better mileage to commute isn't more environmentally friendly either, according to you?
https://slashdot.org/comments.pl?sid=12644750&cid=57354556#
Re:Why should they? ( Score: 2 )
iPhone 4S used to be the best and could run all the applications.
Today, the same power is not sufficient because of software bloat. So you could say that all the iPhones since the iPhone 4S are devices that were created and then dumped for no reason.
It doesn't matter since we can't change the past and it doesn't matter much since improvements are slowing down so people are changing their phones less often.
Mark of the North ( 19760 ) , Friday September 21, 2018 @01:02PM ( #57355296 )Re:Why should they? ( Score: 5 , Interesting)Can you really not see the connection between inefficient software and environmental harm? All those computers running code that uses four times as much data, and four times the number crunching, as is reasonable? That excess RAM and storage has to be built as well as powered along with the CPU. Those material and electrical resources have to come from somewhere.
But the calculus changes completely when the software manufacturer hosts the software (or pays for the hosting) for their customers. Our projected AWS bill motivated our management to let me write the sort of efficient code I've been trained to write. After two years of maintaining some pretty horrible legacy code, it is a welcome change.
The big players care a great deal about efficiency when they can't outsource inefficiency to the user's computing resources.
eth1 ( 94901 ) , Friday September 21, 2018 @11:45AM ( #57354656 )Re:Why should they? ( Score: 5 , Informative)We've been trained to be a consuming society of disposable goods. The latest and greatest feature will always be more important than something that is reliable and durable for the long haul.It's not just consumer stuff.
The network team I'm a part of has been dealing with more and more frequent outages, 90% of which are due to bugs in software running our devices. These aren't fly-by-night vendors either, they're the "no one ever got fired for buying X" ones like Cisco, F5, Palo Alto, EMC, etc.
10 years ago, outages were 10% bugs, and 90% human error, now it seems to be the other way around. Everyone's chasing features, because that's what sells, so there's no time for efficiency/stability/security any more.
LucasBC ( 1138637 ) , Friday September 21, 2018 @12:05PM ( #57354836 )arglebargle_xiv ( 2212710 ) writes:Re:Why should they? ( Score: 3 , Interesting)Poor software engineering means that very capable computers are no longer capable of running modern, unnecessarily bloated software. This, in turn, leads to people having to replace computers that are otherwise working well, solely for the reason to keep up with software that requires more and more system resources for no tangible benefit. In a nutshell -- sloppy, lazy programming leads to more technology waste. That impacts the environment. I have a unique perspective in this topic. I do web development for a company that does electronics recycling. I have suffered the continued bloat in software in the tools I use (most egregiously, Adobe), and I see the impact of technological waste in the increasing amount of electronics recycling that is occurring. Ironically, I'm working at home today because my computer at the office kept stalling every time I had Photoshop and Illustrator open at the same time. A few years ago that wasn't a problem.
Re: ( Score: 3 )commodore64_love ( 1445365 ) , Friday September 21, 2018 @03:58PM ( #57356680 ) JournalThere is one place where people still produce stuff like the OP wants, and that's embedded. Not IoT wank, but real embedded, running on CPUs clocked at tens of MHz with RAM in two-digit kilobyte (not megabyte or gigabyte) quantities. And a lot of that stuff is written to very exacting standards, particularly where something like realtime control and/or safety is involved.
The one problem in this area is the endless battle with standards morons who begin each standard with an implicit "assume an infinitely
Re:Why should they? ( Score: 3 )> Poor software engineering means that very capable computers are no longer capable of running modern, unnecessarily bloated software.
Not just computers.
You can add Smart TVs, settop internet boxes, Kindles, tablets, et cetera that must be thrown-away when they become too old (say 5 years) to run the latest bloatware. Software non-engineering is causing a lot of working hardware to be landfilled, and for no good reason.
Sep 21, 2018 | tech.slashdot.org
JoeDuncan ( 874519 ) , Friday September 21, 2018 @12:58PM ( #57355276 )
Obligatory ( Score: 2 )
Fast, cheap (efficient) and reliable (robust, long lasting): pick 2.
roc97007 ( 608802 ) , Friday September 21, 2018 @12:16PM ( #57354946 ) Journal
Re:Bloat = growth ( Score: 2 )
There's probably some truth to that. And it's a sad commentary on the industry.
Sep 21, 2018 | tech.slashdot.org
Anonymous Coward , Friday September 21, 2018 @11:26AM ( #57354512 )
Moore's law ( Score: 5 , Interesting)
When the speed of your processor doubles every two years along with a concurrent doubling of RAM and disk space, then you can get away with bloatware.
Since Moore's law appears to have stalled at least five years ago, it will be interesting to see if we start to see algorithm research or code optimization techniques coming to the fore again.
Aug 01, 2018 | turcopolier.typepad.com
kao_hsien_chih Bill Herschel • a day ago ,
Very, very slightly off-topic.
Much has been made, including in this post, of the excellent organization of Russian forces and Russian military technology.
I have been re-investigating an open-source relational database system known as PostgreSQL (variously), and I remember finding perhaps a decade ago a very useful full-text search feature of this system which I vaguely remember was written by a Russian and, for that reason, mildly distrusted by me.
Come to find out that the principal developers and maintainers of PostgreSQL are Russian. OMG. Double OMG, because the reason I chose it in the first place is that it is the best non-proprietary RDBMS out there and today is supported on Google Cloud, AWS, etc.
The US has met an equal, or conceivably a superior; case closed. Trump's thoroughly odd behavior with Putin is just one, but a very obvious, example of this.
Of course, Trump's nationalistic blather is creating a "base" of people who believe in the godliness of the US. They are in for a very serious disappointment.
TTG kao_hsien_chih • a day ago ,
After the iron curtain fell, there was a big demand for Russian-trained programmers because they could program in a very efficient and "light" manner that didn't demand too much of the hardware, if I remember correctly.
It's a bit of chicken-and-egg problem, though. Russia, throughout 20th century, had problem with developing small, effective hardware, so their programmers learned how to code to take maximum advantage of what they had, with their technological deficiency in one field giving rise to superiority in another.
Russia has plenty of very skilled, very well-trained folks and their science and math education is, in a way, more fundamentally and soundly grounded on the foundational stuff than US (based on my personal interactions anyways).
Russian tech ppl should always be viewed with certain amount of awe and respect...although they are hardly good on everything.
FarNorthSolitude TTG • a day ago ,
Well said. Soviet university training in "cybernetics" as it was called in the late 1980s involved two years of programming on blackboards before the students even touched an actual computer.
It gave the students an understanding of how computers work down to the bit-flipping level. Imagine trying to fuzz code in your head.
kao_hsien_chih FarNorthSolitude • 10 hours ago ,
I recall flowcharting entirely on paper before committing a program to punched cards. I used to do hex and octal math in my head as part of debugging core dumps. Ah, the glory days.
Honeywell once made a military computer that was 10 bit. That stumped me for a while, as everything was 8 or 16 bit back then.
That used to be fairly common in the civilian sector (in US) too: computing time was expensive, so you had to make sure that the stuff worked flawlessly before it was committed.
No opportunity to see things go wrong and do things over, like much of how things happen nowadays. Russians, with their hardware limitations/shortages, I imagine must have been much more thorough than US programmers were back in the old days, and you could only get there by being very thoroughly grounded in the basics.
Sep 07, 2018 | news.slashdot.org
If we take consulting, services, and support off the table as an option for high-growth revenue generation (the only thing VCs care about), we are left with open core [with some subset of features behind a paywall] , software as a service, or some blurring of the two... Everyone wants infrastructure software to be free and continuously developed by highly skilled professional developers (who in turn expect to make substantial salaries), but no one wants to pay for it. The economics of this situation are unsustainable and broken ...
[W]e now come to what I have recently called "loose" open core and SaaS. In the future, I believe the most successful OSS projects will be primarily monetized via this method. What is it? The idea behind "loose" open core and SaaS is that a popular OSS project can be developed as a completely community driven project (this avoids the conflicts of interest inherent in "pure" open core), while value added proprietary services and software can be sold in an ecosystem that forms around the OSS...
Unfortunately, there is an inflection point at which in some sense an OSS project becomes too popular for its own good, and outgrows its ability to generate enough revenue via either "pure" open core or services and support... [B]uilding a vibrant community and then enabling an ecosystem of "loose" open core and SaaS businesses on top appears to me to be the only viable path forward for modern VC-backed OSS startups.
Klein also suggests OSS foundations start providing fellowships to key maintainers, who currently "operate under an almost feudal system of patronage, hopping from company to company, trying to earn a living, keep the community vibrant, and all the while stay impartial..."

"[A]s an industry, we are going to have to come to terms with the economic reality: nothing is free, including OSS. If we want vibrant OSS projects maintained by engineers that are well compensated and not conflicted, we are going to have to decide that this is something worth paying for. In my opinion, fellowships provided by OSS foundations and funded by companies generating revenue off of the OSS is a great way to start down this path."
raymorris ( 2726007 ) , Sunday April 29, 2018 @02:05AM ( #56522343 ) Journal

Apr 30, 2018 | news.slashdot.org
Long-time Slashdot reader Martin S. pointed us to this excerpt from the new book Live Work Work Work Die: A Journey into the Savage Heart of Silicon Valley by Portland-based investigative reporter Corey Pein.
The author shares what he realized at a job recruitment fair seeking Java Legends, Python Badasses, Hadoop Heroes, "and other gratingly childish classifications describing various programming specialities.
" I wasn't the only one bluffing my way through the tech scene. Everyone was doing it, even the much-sought-after engineering talent.
I was struck by how many developers were, like myself, not really programmers , but rather this, that and the other. A great number of tech ninjas were not exactly black belts when it came to the actual onerous work of computer programming. So many of the complex, discrete tasks involved in the creation of a website or an app had been automated that it was no longer necessary to possess knowledge of software mechanics. The coder's work was rarely a craft. The apps ran on an assembly line, built with "open-source", off-the-shelf components. The most important computer commands for the ninja to master were copy and paste...
[M]any programmers who had "made it" in Silicon Valley were scrambling to promote themselves from coder to "founder". There wasn't necessarily more money to be had running a startup, and the increase in status was marginal unless one's startup attracted major investment and the right kind of press coverage. It's because the programmers knew that their own ladder to prosperity was on fire and disintegrating fast. They knew that well-paid programming jobs would also soon turn to smoke and ash, as the proliferation of learn-to-code courses around the world lowered the market value of their skills, and as advances in artificial intelligence allowed for computers to take over more of the mundane work of producing software. The programmers also knew that the fastest way to win that promotion to founder was to find some new domain that hadn't yet been automated. Every tech industry campaign designed to spur investment in the Next Big Thing -- at that time, it was the "sharing economy" -- concealed a larger programme for the transformation of society, always in a direction that favoured the investor and executive classes.
"I wasn't just changing careers and jumping on the 'learn to code' bandwagon," he writes at one point. "I was being steadily indoctrinated in a specious ideology."
Anonymous Coward , Saturday April 28, 2018 @11:40PM ( #56522045 )
older generations already had a term for this ( Score: 5 , Interesting)
Older generations called this kind of fraud "fake it 'til you make it."
The people who are smarter won't ( Score: 5 , Informative)
brian.stinar ( 1104135 ) writes:
> The people who can do both are smart enough to build their own company and compete with you.
Been there, done that. Learned a few lessons. Nowadays I work 9:30-4:30 for a very good, consistent paycheck and let some other "smart person" put in 75 hours a week dealing with hiring, managing people, corporate strategy, staying up on the competition, figuring out tax changes each year and getting taxes filed six times each year, the various state and local requirements, legal changes, contract hassles, etc, while hoping the company makes money this month so they can take a paycheck and pay their rent.
I learned that I'm good at creating software systems and I enjoy it. I don't enjoy all-nighters, partners being dickheads trying to pull out of a contract, or any of a thousand other things related to running a start-up business. I really enjoy a consistent, six-figure compensation package too.
Re: ( Score: 2 )
Cederic ( 9623 ) writes:
* getting taxes filed eighteen times a year.
I pay monthly gross receipts tax (12), quarterly withholdings (4) and a corporate (1) and individual (1) returns. The gross receipts can vary based on the state, so I can see how six times a year would be the minimum.
Re: ( Score: 2 )
serviscope_minor ( 664417 ) writes:
Fuck no. Cost of full automation: $4m. Cost of manual entry: $0. Opportunity cost of manual entry: $800/year.
At worst, pay for an accountant, if you can get one that cheaply. Bear in mind talking to them incurs most of that opportunity cost anyway.
Re: ( Score: 2 )
raymorris ( 2726007 ) writes:
Nowadays I work 9:30-4:30 for a very good, consistent paycheck and let some other "smart person" put in 75 hours a week dealing with hiring
There's nothing wrong with not wanting to run your own business, it's not for most people, and even if it was, the numbers don't add up. But putting the scare quotes in like that makes it sound like you have a huge chip on your shoulder. Those things are just as essential to the business as your work, and without them you wouldn't have the steady 9:30-4:30 with a good paycheck.
Important, and dumb. ( Score: 3 , Informative)
raymorris ( 2726007 ) writes:
Of course they are important. I wouldn't have done those things if they weren't important!
I frequently have friends say things like "I love baking. I can't get enough of baking. I'm going to open a bakery.". I ask them "do you love dealing with taxes, every month? Do you love contract law? Employment law? Marketing? Accounting?" If you LOVE baking, the smart thing to do is to spend your time baking. Running a start-up business, you're not going to do much baking.
If you love marketing, employment law, taxes
Four tips for a better job. Who has more? ( Score: 3 )
I can tell you a few things that have worked for me. I'll go in chronological order rather than priority order.
Make friends in the industry you want to be in. Referrals are a major way people get jobs.
Look at the job listings for jobs you'd like to have and see which skills a lot of companies want, but you're missing. For me that's Java. A lot of companies list Java skills and I'm not particularly good with Java. Then consider learning the skills you lack, the ones a lot of job postings are looking for.
Certifi
goose-incarnated ( 1145029 ) , Sunday April 29, 2018 @02:34PM ( #56524475 ) Journal
Re: older generations already had a term for this ( Score: 5 , Insightful)
cheekyboy ( 598084 ) writes:
You don't understand the point of an ORM do you? I'd suggest reading why they exist
They exist because programmers value code design more than data design. ORMs are the poster-child for square-peg-round-hole solutions, which is why all ORMs choose one of three different ways of squashing hierarchical data into a relational form, all of which are crappy.
If the devs of the system (the ones choosing to use an ORM) had any competence at all they'd design their database first because in any application that uses a database the database is the most important bit, not the OO-ness or Functional-ness of the design.
Over the last few decades I've seen programs in a system come and go; a component here gets rewritten, a component there gets rewritten, but you know what? They all have to work with the same damn data.
You can more easily switch out your code for new code with a new design in a new language than you can change the database structure. So explain to me why it is that you think the database should be mangled to fit your OO code rather than mangling your OO code to fit the database?
im sick of reinventors and new frameworks ( Score: 3 )
gbjbaanb ( 229885 ) writes:
Stick to the one thing for 10-15 years. Often all this new shit doesn't do jack different to the old shit; it's not faster, it's not better. Every dick wants to be famous so makes another damn library/tool with his own fancy name and feature, instead of enhancing an existing product.
Re: ( Score: 2 )
Greyfox ( 87712 ) writes:
amen to that.
Or kids who can't hack the main stuff, suddenly discover the cool new, and then they can pretend they're "learning" it, and when the going gets tough (as it always does) they can declare the tech to be pants and move to another.
hence we had so many people on the bandwagon for functional programming, then dumped it for ruby on rails, then dumped that for Node.js, not sure what they're on at currently, probably back to asp.net.
Re: ( Score: 2 )djinn6 ( 1868030 ) writes:How much code do you have to reuse before you're not really programming anymore? When I started in this business, it was reasonably possible that you could end up on a project that didn't particularly have much (or any) of an operating system. They taught you assembly language and the process by which the system boots up, but I think if I were to ask most of the programmers where I work, they wouldn't be able to explain how all that works...
Re: ( Score: 2 )
Junta ( 36770 ) writes:
It really feels like if you know what you're doing it should be possible to build a team of actually good programmers and put everyone else out of business by actually meeting your deliverables, but no one has yet. I wonder why that is.
You mean Amazon, Google, Facebook and the like? People may not always like what they do, but they manage to get things done and make plenty of money in the process. The problem for a lot of other businesses is not having a way to identify and promote actually good programmers. In your example, you could've spent 10 minutes fixing their query and saved them days of headache, but how much recognition will you actually get? Where is your motivation to help them?
Re: ( Score: 2 )cheekyboy ( 598084 ) writes:It's not a "kids these days" sort of issue, it's *always* been the case that shameless, baseless self-promotion wins out over sincere skill without the self-promotion, because the people who control the money generally understand boasting more than they understand the technology. Yes it can happen that baseless boasts can be called out over time by a large enough mass of feedback from competent peers, but it takes a *lot* to overcome the tendency for them to have faith in the boasts.
It does correlate stron
Re: ( Score: 2 )Junta ( 36770 ) writes:And all these modern coders forget old lessons, and make shit stuff, just look at instagram windows app, what a load of garbage shit, that us old fuckers could code in 2-3 weeks.
Instagram - your app sucks, cookie cutter coders suck, no refinement, coolness. Just cheap ass shit, with limited usefulness.
Just like most of commercial software that's new - quick shit.
Oh and its obvious if your an Indian faking it, you haven't worked in 100 companies at the age of 29.
Re: ( Score: 2 )molarmass192 ( 608071 ) , Sunday April 29, 2018 @02:15AM ( #56522359 ) Homepage JournalHere's another problem, if faced with a skilled team that says "this will take 6 months to do right" and a more naive team that says "oh, we can slap that together in a month", management goes with the latter. Then the security compromises occur, then the application fails due to pulling in an unvetted dependency update live into production. When the project grows to handling thousands instead of dozens of users and it starts mysteriously folding over and the dev team is at a loss, well the choice has be
Re:older generations already had a term for this ( Score: 5 , Interesting)
AmiMoJo ( 196126 ) writes: < mojo@@@world3...net > on Sunday April 29, 2018 @05:15AM ( #56522597 ) Homepage Journal
These restrictions are a large part of what makes Arduino programming "fun". If you don't plan out your memory usage, you're gonna run out of it. I cringe when I see 8MB web pages of bloated "throw in everything including the kitchen sink and the neighbor's car". Unfortunately, the careful and cautious way is dying in favor of throwing 3rd party code at it until it does something. Of course, I don't have time to review it, but I'm sure everybody else has peer reviewed it for flaws and exploits line by line.
Re:older generations already had a term for this ( Score: 4 , Informative)
locketine ( 1101453 ) writes:
Unfortunately, the careful and cautious way is dying in favor of throwing 3rd party code at it until it does something.
Of course. What is the business case for making it efficient? Those massive frameworks are cached by the browser and run on the client's system, so they cost you nothing and save you time to market. Efficient costs money with no real benefit to the business.
If we want to fix this, we need to make bloat have an associated cost somehow.
Re: older generations already had a term for this ( Score: 2 )serviscope_minor ( 664417 ) writes:My company is dealing with the result of this mentality right now. We released the web app to the customer without performance testing and doing several majorly inefficient things to meet deadlines. Once real load was put on the application by users with non-ideal hardware and browsers, the app was infuriatingly slow. Suddenly our standard sub-40 hour workweek became a 50+ hour workweek for months while we fixed all the inefficient code and design issues.
So, while you're right that getting to market and opt
Re: ( Score: 2 )
serviscope_minor ( 664417 ) writes:
In the bad old days we had a hell of a lot of ridiculous restrictions. We had to somehow make our programs run successfully inside RAM that was 48KB in size (yes, 48KB, not 48MB or 48GB), on a CPU with a clock speed of 1.023 MHz
We still have them. In fact some of the systems I've programmed have been more resource limited than the gloriously spacious 32KiB memory of the BBC model B. Take the PIC12F or 10F series. A glorious 64 bytes of RAM, max clock speed of 16MHz, but not unusual to run it 32kHz.
Re: ( Score: 2 )
tlhIngan ( 30335 ) writes:
So what are the uses for that? I am curious what things people have put these to use for.
It's hard to determine because people don't advertise use of them at all. However, I know that my electric toothbrush uses an Epson 4-bit MCU of some description. It's got a status LED, a basic NiMH battery charger and a PWM controller for an H-bridge. Braun sell a *lot* of electric toothbrushes. Any gadget that's smarter than a simple switch will probably have some sort of basic MCU in it. Alarm system components, sensor
Re: ( Score: 3 , Insightful)Anonymous Coward writes:b) No computer ever ran at 1.023 MHz. It was either a nice multiple of 1Mhz or maybe a multiple of 3.579545Mhz (ie. using the TV output circuit's color clock crystal to drive the CPU).Well, it could be used to drive the TV output circuit, OR, it was used because it's a stupidly cheap high speed crystal. You have to remember except for a few frequencies, most crystals would have to be specially cut for the desired frequency. This occurs even today, where most oscillators are either 32.768kHz (real time clock
Re: ( Score: 2 , Interesting)Wraithlyn ( 133796 ) writes:Yeah, nice talk. You could have stopped after the first sentence. The other AC is referring to the Commodore C64 [wikipedia.org]. The frequency has nothing to do with crystal availability but with the simple fact that everything in the C64 is synced to the TV. One clock cycle equals 8 pixels. The graphics chip and the CPU take turns accessing the RAM. The different frequencies dictated by the TV standards are the reason why the CPU in the NTSC version of the C64 runs at 1.023MHz and the PAL version at 0.985MHz.
Re: ( Score: 2 )
Anonymous Coward writes:
LOL what exactly is so special about 16K RAM? https://yourlogicalfallacyis.c... [yourlogicalfallacyis.com]
I cut my teeth on a VIC20 (5K RAM), then later a C64 (which ran at 1.023MHz...)
Re: ( Score: 2 , Interesting)wierd_w ( 1375923 ) , Saturday April 28, 2018 @11:58PM ( #56522075 )Commodore 64 for the win. I worked for a company that made detection devices for the railroad, things like monitoring axle temperatures, reading the rail car ID tags. The original devices were made using Commodore 64 boards using software written by an employee at the one rail road company working with them.
The company then hired some electrical engineers to design custom boards using the 68000 chips and I was hired as the only programmer. Had to rewrite all of the code which was fine...
... A job fair can easily test this competency. ( Score: 4 , Interesting)
ShanghaiBill ( 739463 ) , Sunday April 29, 2018 @01:50AM ( #56522321 )
Many of these languages have an interactive interpreter. I know for a fact that Python does.
So, since job-fairs are an all day thing, and setup is already a thing for them -- set up a booth with like 4 computers at it, and an admin station. The 4 terminals have an interactive session with the interpreter of choice. Every 20min or so, have a challenge for "Solve this problem" (needs to be easy and already solved in general. Programmers hate being pimped without pay. They don't mind tests of skill, but hate being pimped. Something like "sort this array, while picking out all the prime numbers" or something.) and see who steps up. The ones that step up have confidence they can solve the problem, and you can quickly see who can do the work and who can't.
The ones that solve it, and solve it to your satisfaction, you offer a nice gig to.
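For concreteness, here is a hedged sketch of what a passing answer to that kind of booth challenge might look like (the problem statement above is the poster's hypothetical; Python as elsewhere):

def is_prime(n):
    # Trial division is plenty for a booth-sized exercise.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def sort_and_find_primes(values):
    # Return the values sorted, plus the primes found among them.
    ordered = sorted(values)
    primes = [v for v in ordered if is_prime(v)]
    return ordered, primes

print(sort_and_find_primes([9, 4, 7, 2, 11, 6]))
# ([2, 4, 6, 7, 9, 11], [2, 7, 11])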
Re:... A job fair can easily test this competency. ( Score: 5 , Informative)
wierd_w ( 1375923 ) writes:
Then you get someone good at sorting arrays while picking out prime numbers, but potentially not much else.
The point of the test is not to identify the perfect candidate, but to filter out the clearly incompetent. If you can't sort an array and write a function to identify a prime number, I certainly would not hire you. Passing the test doesn't get you a job, but it may get you an interview
... where there will be other tests.
Re: ( Score: 2 )
wierd_w ( 1375923 ) , Sunday April 29, 2018 @04:02AM ( #56522443 )
BINGO!
(I am not even a professional programmer, but I can totally perform such a trivially easy task. The example tests basic understanding of loop construction, function construction, variable use, efficient sorting, and error correction -- especially with mixed-type arrays. All of these are things any programmer SHOULD know how to do, without being overly complicated, or clearly a disguised occupational problem trying to get a free solution. Like I said, programmers hate being pimped, and will be turned off
Re: ... A job fair can easily test this competency ( Score: 5 , Insightful)
luis_a_espinal ( 1810296 ) writes:
Again, the quality applicant and the code monkey both have something the fakers do not-- Actual comprehension of what a program is, and how to create one.
As Bill points out, this is not the final exam. This is the "Oh, I see you do actually know how to program -- show me more" portion of the process. This is the part that HR drones are not capable of performing, due to Dunning-Kruger. Those that are actually, REALLY competent will do more than just satisfy the requirements of the challenge; they will provide actually working solutions to the challenge that properly validate their input, and return proper error states if the input is invalid, etc. You can learn a LOT about a potential hire by observing their work. *THAT* is what this is really about. The triviality of the problem is a necessity, because you ***DON'T*** try to get free solutions out of people.
I realize that may be difficult for you to comprehend, but you *DON'T* do that. The job fair is to let people know that you have a position available, and try to curry interest in people to apply. A successful pre-screening is confidence building, and helps the potential hire to feel that your company is actually interested in actually hiring somebody, and not just fucking off in the booth, to cover for "failing to find somebody" and then "Getting yet another H1B". It gives them a chance to show you what they can do. That is what it is for, and what it does. It also excludes the fakers that this article is about-- The ones that can talk a good talk, but could not program a simple boolean check condition if their life depended on it.
If it were not for the time constraints of a job fair (usually only 2 days, and in that time you need to try and pre-screen as many as possible), I would suggest a tiered challenge, with progressively harder challenges, where you hand out resumes to the ones that make it to the top 3 brackets, but that is not the way the world works.
Re: ( Score: 2 )
PaulRivers10 ( 4110595 ) writes:
This in my opinion is really a waste of time. Challenges like this have to be so simple they can be done walking up to a booth, and are not likely to filter the "all talk" types any better than a few interview questions could (in person, so the candidate can't just google it). Tougher, more involved stuff isn't good either; it gives a huge advantage to the full-time job hunter. The guy or gal that already has a 9-5 and a family that wants to see them has not got time for games. We have been struggling with hiring where I work (I do a lot of the interviews) and these are the conclusions we have reached.
You would be surprised at the number of people with impeccable-looking resumes failing at something as simple as the FizzBuzz test [codinghorror.com]
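For reference, the whole FizzBuzz exercise fits in a few lines (one of many acceptable variants, shown here in Python):

# FizzBuzz: multiples of 3 print "Fizz", multiples of 5 print "Buzz",
# multiples of both print "FizzBuzz", everything else prints the number.
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)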
Re: ... A job fair can easily test this competenc ( Score: 2 )
Hognoxious ( 631665 ) writes:
The only thing FizzBuzz tests is "have you done FizzBuzz before?" It's a short question filled with every petty trick the author could think to throw in there. If you haven't seen the tricks, they trip you up for no reason related to your actual coding skills. Once you have seen them, they're trivial and again unrelated to real work. FizzBuzz is best passed by someone aiming to game the interview system. It passes people gaming it and trips up people who spent their time doing real work on the job.
Re: ( Score: 2 )
luis_a_espinal ( 1810296 ) , Sunday April 29, 2018 @07:49AM ( #56522861 ) Homepage
they trip you up for no reason related to your actual coding skills.
Bullshit!
filter the lame code monkeys ( Score: 4 , Informative)
ShanghaiBill ( 739463 ) writes:
Lame monkey tests select for lame monkeys.
A good programmer first and foremost has a clean mind. Experience suggests puzzle geeks, who excel at contrived tests, are usually sloppy thinkers.
No. Good programmers can trivially knock out any of these so-called lame monkey tests. It's lame code monkeys who can't do it. And I've seen their work. Many night shifts and weekends I've burned trying to fix their shit because they couldn't actually do any of the things behind what you call "lame monkey tests", like:
* pulling expensive invariant calculations out of loops
* using for loops to scan a fucking table to pull rows or calculate an aggregate when they could let the database do what it does best with a simple SQL statement
* systems crashing under actual load because their shitty code was never stress tested (but it worked on my dev box!)
* again with databases, having to redo their schemas because they were fattened up so much with columns like VALUE1, VALUE2, ... VALUE20 (normalize, you assholes!)
* chatty remote APIs -- because these code monkeys cannot think about the need for bulk operations in increasingly distributed systems
* storing dates in unsortable strings because the idiots do not know most modern programming languages have a date data type

Oh, and the most important: off-by-one looping errors. I see this all the time, the type of thing a good programmer can spot quickly because he or she can do the so-called "lame monkey tests" that involve arrays and sorting.
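A minimal sketch of the first item in that list, hoisting a loop-invariant calculation out of a loop (Python, with made-up numbers):

import math

rates = [i / 1_000_000 for i in range(100_000)]

# Before: the expensive, unchanging factor is recomputed on every iteration.
def total_slow(principal, years):
    return sum(principal * math.log(principal) * math.exp(r * years) for r in rates)

# After: the invariant factor is computed once, outside the loop.
def total_fast(principal, years):
    scale = principal * math.log(principal)   # loop-invariant
    return sum(scale * math.exp(r * years) for r in rates)

assert math.isclose(total_slow(1000, 5), total_fast(1000, 5), rel_tol=1e-9)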
I've seen the type: "I don't need to do this shit because I have business knowledge and I code for business and IT not google", and then they go and code and fuck it up... and then the rest of us have to go clean up their shit at 1AM or on weekends.
If you work as an hourly paid contractor cleaning that crap, it can be quite lucrative. But sooner or later it truly sucks the energy out of your soul.
So yeah, we need more lame monkey tests
... to filter the lame code monkeys.
Re: ( Score: 3 )
Junta ( 36770 ) writes:
Someone could Google the problem with the phone then step up and solve the challenge.
If, given a spec, someone can consistently cobble together working code by Googling, then I would love to hire them. That is the most productive way to get things done.
There is nothing wrong with using external references. When I am coding, I have three windows open: an editor, a testing window, and a browser with a Stackoverflow tab open.
Re: ( Score: 2 )bobstreo ( 1320787 ) writes:Yeah, when we do tech interviews, we ask questions that we are certain they won't be able to answer, but want to see how they would think about the problem and what questions they ask to get more data and that they don't just fold up and say "well that's not the sort of problem I'd be thinking of" The examples aren't made up or anything, they are generally selection of real problems that were incredibly difficult that our company had faced before, that one may not think at first glance such a position would
Nothing worse ( Score: 2 )
Anonymous Coward , Sunday April 29, 2018 @12:34AM ( #56522157 )
than spending weeks interviewing "good" candidates for an opening, selecting a couple and hiring them as contractors, then finding out they are less than unqualified to do the job they were hired for.
I've seen it a few times, Java "experts", Microsoft "experts" with years of experience on their resumes, but completely useless in coding, deployment or anything other than buying stuff from the break room vending machines.
That being said, I've also seen projects costing hundreds of thousands of dollars, with y
Re:Nothing worse ( Score: 4 , Insightful)
Anonymous Coward writes:
The moment you said "contractors", you lost any sane developer. Keep swimming, it's not a fish.
Re: ( Score: 2 , Informative)
Lanthanide ( 4982283 ) writes:
I agree with this. I consider myself to be a good programmer and I would never go into the contractor game. I also wonder, how does it take you weeks to interview someone and you still can't figure out if the person can't code? I could probably see that in 15 minutes in a pair coding session.
Also, Oracle, SAP, IBM... I would never buy from them, nor use their products. I have used plenty of IBM products and they suck big time. They make software development 100 times harder than it could be. Their technical supp
Re: ( Score: 2 )
Anonymous Coward writes:
It's weeks to interview multiple different candidates before deciding on 1 or 2 of them. Not weeks per person.
Re: ( Score: 3 , Insightful)
pauljlucas ( 529435 ) writes:
That being said, I've also seen projects costing hundreds of thousands of dollars, with years of delays from companies like Oracle, Sun, SAP, and many other "vendors"
Software development is a hard thing to do well, despite the general thinking of technology becoming cheaper over time, and like health care the quality of the goods and services received can sometimes be difficult to ascertain. However, people who don't respect developers and the problems we solve are very often the same ones who continually frustrate themselves by trying to cheap out, hiring outsourced contractors, and then tearing their hair out when sub par results are delivered, if anything is even del
Re: ( Score: 2 )VeryFluffyBunny ( 5037285 ) writes:As part of your interview process, don't you have candidates code a solution to a problem on a whiteboard? I've interviewed lots of "good" candidates (on paper) too, but they crashed and burned when challenged with a coding exercise. As a result, we didn't make them job offers.
I do the opposite ( Score: 2 )
Tony Isaac ( 1301187 ) writes:
I'm not a great coder but good enough to get done what clients want done. If I'm not sure or don't think I can do it, I tell them. I think they appreciate the honesty. I don't work in a tech-hub, startups or anything like that so I'm not under the same expectations and pressures that others may be.
Bigger building blocks ( Score: 2 )
Anonymous Coward writes:
OK, so yes, I know plenty of programmers who do fake it. But stitching together components isn't "fake" programming.
Back in the day, we had to write our own code to loop through an XML file, looking for nuggets. Now, we just use an XML serializer. Back then, we had to write our own routines to send TCP/IP messages back and forth. Now we just use a library.
I love it! I hated having to make my own bricks before I could build a house. Now, I can get down to the business of writing the functionality I want, ins
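To make that point concrete: what used to be a hand-rolled scanning loop is now a few standard-library calls (a sketch using Python's xml.etree module; the document and element names are made up):

import xml.etree.ElementTree as ET

doc = """
<orders>
  <order id="1"><total>19.99</total></order>
  <order id="2"><total>5.00</total></order>
</orders>
"""

# No hand-written "look for the nugget" loop; the parser hands us a tree to walk.
root = ET.fromstring(doc)
for order in root.findall("order"):
    print(order.get("id"), order.findtext("total"))
# 1 19.99
# 2 5.00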
Re: ( Score: 2 , Insightful)
Tony Isaac ( 1301187 ) writes:
But, I suspect you could write the component if you had to. That makes you a very different user of that component than someone who just knows it as a magic black box.
Because of this, you understand the component better and have real knowledge of its strengths and limitations. People blindly using components with only a cursory idea of their internal operation often cause major performance problems. They rarely recognize when it is time to write their own to overcome a limitation (or even that it is possibl
Re: ( Score: 2 )thesupraman ( 179040 ) writes:You're right on all counts. A person who knows how the innards work, is better than someone who doesn't, all else being equal. Still, today's world is so specialized that no one can possibly learn it all. I've never built a processor, as you have, but I still have been able to build a DNA matching algorithm for a major DNA lab.
I would argue that anyone who can skillfully use off-the-shelf components can also learn how to build components, if they are required to.
Ummm. ( Score: 2 )
Tony Isaac ( 1301187 ) writes:
1, 'Back in the Day' there was no XML; XML was not very long ago.
2, its a parser, a serialiser is pretty much the opposite (unless this weeks fashion has redefined that.. anything is possible).
3, 'Back then' we didn't have TCP stacks... But, actually I agree with you. I can only assume the author thinks there are lots of fake plumbers because they don't cast their own toilet bowls from raw clay, and use pre-built fittings and pipes! That car mechanics start from raw steel scrap and a file... And that you need
Re: ( Score: 2 )
Tony Isaac ( 1301187 ) , Sunday April 29, 2018 @01:46AM ( #56522313 ) Homepage
For the record, XML was invented in 1997, you know, in the last century! https://en.wikipedia.org/wiki/... [wikipedia.org]
And we had a WinSock library in 1992. https://en.wikipedia.org/wiki/... [wikipedia.org]

Yes, I agree with you on the "middle ground." My reaction was to the author's point that "not knowing how to build the components" was the same as being a "fake programmer."
Re:Bigger building blocks ( Score: 5 , Interesting)frank_adrian314159 ( 469671 ) writes:If I'm a plumber, and I don't know anything about the engineering behind the construction of PVC pipe, I can still be a good plumber. If I'm an electrician, and I don't understand the role of a blast furnace in the making of the metal components, I can still be a good electrician.
The analogy fits. If I'm a programmer, and I don't know how to make an LZW compression library, I can still be a good programmer. It's a matter of layers. These days, we specialize. You've got your low-level programmers that make the components, the high level programmers that put together the components, the graphics guys who do HTML/CSS, and the SQL programmers that just know about databases. Every person has their specialty. It's no longer necessary to be a low-level programmer, or jack-of-all-trades, to be "good."
If I don't know the layout of the IP header, I can still write quality networking software, and if I know XSLT, I can still do cool stuff with XML, even if I don't know how to write a good parser.
Re: ( Score: 3 )Tony Isaac ( 1301187 ) writes:I was with you until you said " I can still do cool stuff with XML".
Re: ( Score: 2 )careysub ( 976506 ) writes:LOL yeah I know it's all JSON now. I've been around long enough to see these fads come and go. Frankly, I don't see a whole lot of advantage of JSON over XML. It's not even that much more compact, about 10% or so. But the point is that the author laments the "bad old days" when you had to create all your own building blocks, and you didn't have a team of specialists. I for one don't want to go back to those days!
Re: ( Score: 3 )Cederic ( 9623 ) writes:The main advantage of JSON is that it is consistent. XML has attributes and embedded optional stuff within tags. That was derived from the original SGML ancestor, where it was thought to be a convenience for the human authors who were supposed to be making the mark-up manually. Programmatically it is a PITA.
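A small, purely illustrative example of that consistency argument, using only the Python standard library (the record itself is made up): the same data in XML splits across attributes and child elements and comes back as strings, while JSON maps directly onto an ordinary dictionary with typed values.

import json
import xml.etree.ElementTree as ET

xml_doc = '<user id="7" active="true"><name>Ada</name></user>'
json_doc = '{"id": 7, "active": true, "name": "Ada"}'

user = ET.fromstring(xml_doc)
# Attributes and child elements need different access paths, and every
# value is a string that the caller must convert.
print(user.get("id"), user.get("active"), user.findtext("name"))

record = json.loads(json_doc)
# One uniform structure: keys map to typed values (int, bool, str).
print(record["id"], record["active"], record["name"])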
Re: ( Score: 3 )Anonymous Coward writes:I got shit for decrying XML back when it was the trendy thing. I've had people apologise to me months later because they've realized I was right, even though at the time they did their best to fuck over my career because XML was the new big thing and I wasn't fully on board.
XML has its strengths and its place, but fuck me it taught me how little some people really fucking understand shit.
Silicon Valley is Only Part of the Tech Business ( Score: 2 , Informative)phantomfive ( 622387 ) writes:And a rather small part at that, albeit a very visible and vocal one full of the proverbial prima donnas. However, much of the rest of the tech business, or at least the people working in it, are not like that. It's small groups of developers working in other industries that would not typically be considered technology. There are software developers working for insurance companies, banks, hedge funds, oil and gas exploration or extraction firms, national defense and many hundreds and thousands of other small
bonfire of fakers ( Score: 2 )Njovich ( 553857 ) , Sunday April 29, 2018 @05:35AM ( #56522641 )This is the reason I wish programming didn't pay so much....the field is better when it's mostly populated by people who enjoy programming.
Learn to code courses ( Score: 5 , Insightful)AndyKron ( 937105 ) writes:
"They knew that well-paid programming jobs would also soon turn to smoke and ash, as the proliferation of learn-to-code courses around the world lowered the market value of their skills, and as advances in artificial intelligence allowed for computers to take over more of the mundane work of producing software."
Kind of hard to take this article seriously after saying gibberish like this. I would say most good programmers know that neither learn-to-code courses nor AI are going to make a dent in their income any time soon.
Me? No ( Score: 2 )Escogido ( 884359 ) , Sunday April 29, 2018 @06:59AM ( #56522777 )As a non-programmer Arduino and libraries are my friends
in the silly cone valley ( Score: 5 , Interesting)quonset ( 4839537 ) , Sunday April 29, 2018 @07:20AM ( #56522817 )There is a huge shortage of decent programmers. I have personally witnessed more than one phone "interview" that went like "have you done this? what about this? do you know what this is? um, can you start Monday?" (120K-ish salary range)
Partly because there are way more people who got their stupid ideas funded than good coders willing to stain their resume with that. Partly because, if you are funded and cannot do all the required coding solo, here's your conundrum:
- top level hackers can afford to be really picky, so on one hand it's hard to get them interested, and if you could get that, they often want some ownership of the project. the plus side is that they are happy to work for lots of equity if they have faith in the idea, but that can be a huge "if".
- "good but not exceptional" senior engineers aren't usually going to be super happy, as they often have spouses and children and mortgages, so they'd favor job security over exciting ideas and startup lottery.
- that leaves you with fresh-out-of-college folks, which are really really a mixed bunch. some are actually already senior level of understanding without the experience, some are absolutely useless, with varying degrees in between, and there's no easy way to tell which is which early.
so the not-so-scrupulous folks realized what's going on, and launched multiple coding boot camp programmes, to essentially trick both the students into believing they can become a coder in a month or two, and the prospective employers into believing that said students are useful. so far it's been working, to a degree, in part because in such companies the coding-skill evaluation process is broken. but one can only hide their lack of value-add for so long, even if they do manage to bluff their way into a job.
Duh! ( Score: 4 , Insightful)swb ( 14022 ) , Sunday April 29, 2018 @07:48AM ( #56522857 )All one had to do was look at the lousy state of software and web sites today to see this is true. It's quite obvious little to no thought is given on how to make something work such that one doesn't have to jump through hoops.
I have many times said the most perfect word processing program ever developed was WordPerfect 5.1 for DOS. One's productivity was astonishing. It just worked.
Now we have the bloated behemoth Word which does its utmost to get in the way of you doing your work. The only way to get it to function is to turn large portions of its "features" off, and even then it still insists on doing something other than what you told it to do.
Then we have the abomination of Windows 10, which is nothing but Clippy on 10X steroids. It is patently obvious the people who program this steaming pile have never heard of simplicity. Who in their right mind would think having to "search" for something is more efficient than going directly to it? I would ask whether these people wander around stores "searching" for what they're looking for, but then I realize that's how their entire life is run. They search for everything online rather than going directly to the source. It's no wonder they complain about not having time to do things. They're always searching.
Web sites are another area where these people have no clue what they're doing. Anything that might be useful is hidden behind dropdown menus, flyouts, popup bubbles and intricately designed mazes of clicks needed to get to where you want to go. When someone clicks on a line of products, they shouldn't be harassed about what part of the product line they want to look at. Give them the information and let the user go where they want.
This rant could go on, but this article explains clearly why we have regressed when it comes to software and web design. Instead of making things simple and easy to use, using the one or two brain cells they have, programmers and web designers let the software do what it wants without considering, should it be done like this?
Tech industry churn ( Score: 3 )DaMattster ( 977781 ) , Sunday April 29, 2018 @08:39AM ( #56522979 )The tech industry has a ton of churn -- there's some technological advancement, but there's an awful lot of new products turned out simply to keep customers buying new licenses and paying for upgrades.
This relentless and mostly phony newness means a lot of people have little experience with current products. People fake because they have no choice. The good ones understand the general technologies and problems they're meant to solve and can generally get up to speed quickly, while the bad ones are good at faking it but don't really know what they're doing. Telling the difference from the outside is impossible.
Sales people make it worse, promoting people as "experts" in specific products or implementations because the people have experience with a related product and "they're all the same". This burns out the people with good adaption skills.
Interesting ( Score: 3 )ElitistWhiner ( 79961 ) writes:From the summary, it sounds like a lot of programmers and software engineers are trying to develop the next big thing so that they can literally beg for money from the elite class and one day, hopefully, become a member of the aforementioned. It's sad how the middle class has been utterly decimated in the United States that some of us are willing to beg for scraps from the wealthy. I used to work in IT but I've aged out and am now back in school to learn automotive technology so that I can do something other than being a security guard. Currently, the only work I have been able to find has been in the unglamorous security field.
I am learning some really good new skills in the automotive program that I am in but I hate this one class called "Professionalism in the Shop." I can summarize the entire class in one succinct phrase, "Learn how to appeal to, and communicate with, Mr. Doctor, Mr. Lawyer, or Mr. Wealthy-man." Basically, the class says that we are supposed to kiss their ass so they keep coming back to the Audi, BMW, Mercedes, Volvo, or Cadillac dealership. It feels a lot like begging for money on behalf of my employer (of which very little of it I will see) and nothing like professionalism. Professionalism is doing the job right the first time, not jerking the customer off. Professionalism is not begging for a 5 star review for a few measly extra bucks but doing absolute top quality work. I guess the upshot is that this class will be the easiest 4.0 that I've ever seen.
There is something fundamentally wrong when the wealthy elite have basically demanded that we beg them for every little scrap. I can understand the importance of polite and professional interaction but this prevalent expectation that we bend over backwards for them crosses a line with me. I still suck it up because I have to but it chafes my ass to basically validate the wealthy man.
Natural talent... ( Score: 2 )fahrbot-bot ( 874524 ) writes:In 70's I worked with two people who had a natural talent for computer science algorithms
.vs. coding syntax. In the 90's while at COLUMBIA I worked with only a couple of true computer scientists out of 30 students. I've met 1 genius who programmed, spoke 13 languages, ex-CIA, wrote SWIFT and spoke fluent assembly complete with animated characters. According to the Bluff Book, everyone else without natural talent fakes it. In the undiluted definition of computer science, genetics roulette and intellectual d
Other book sells better and is more interesting ( Score: 2 )Anonymous Coward writes:New Book Describes 'Bluffing' Programmers in Silicon ValleyIt's not as interesting as the one about "fluffing" [urbandictionary.com] programmers.
Re: ( Score: 3 , Funny)luis_a_espinal ( 1810296 ) writes:Ah yes, the good old 80:20 rule, except it's recursive for programmers.
80% are shit, so you fire them. Soon you realize that 80% of the remaining 20% are also shit, so you fire them too. Eventually you realize that 80% of the 4% remaining after sacking the 80% of the 20% are also shit, so you fire them!
...
The cycle repeats until there's just one programmer left: the person telling the joke.
---
tl;dr: All programmers suck. Just ask them to review their own code from more than 3 years ago: they'll tell you that
Re: ( Score: 3 )Anonymous Coward , Sunday April 29, 2018 @01:17AM ( #56522241 )Who gives a fuck about lines? If someone gave me JavaScript, and someone gave me minified JavaScript, which one would I want to maintain? I don't care about your line savings, less isn't always better.
Because the world of programming is not centered about JavaScript, and reduction of lines is not the same as minification. If the first thing that came to your mind was minified JavaScript when you saw this conversation, you are certainly not the type of programmer I would want to inherit code from.
See, there's a lot of shit out there that is overtly redundant and unnecessarily complex. This is especially true when copy-n-paste code monkeys are left to their own devices for whom code formatting seems
Re:Most "Professional programmers" are useless. ( Score: 4 , Interesting)Anonymous Coward , Sunday April 29, 2018 @01:40AM ( #56522299 )I have a theory that 10% of people are good at what they do. It doesn't really matter what they do, they will still be good at it, because of their nature. These are the people who invent new things, who fix things that others didn't even see as broken and who automate routine tasks or simply question and erase tasks that are not necessary. If you have a software team that contain 5 of these, you can easily beat a team of 100 average people, not only in cost but also in schedule, quality and features. In theory they are worth 20 times more than average employees, but in practise they are usually paid the same amount of money with few exceptions.
80% of people are the average. They can follow instructions and they can get the work done, but they don't see that something is broken and needs fixing if it works the way it has always worked. While it might seem so, these people are not worthless. There are a lot of tasks that these people are happily doing which the 10% don't want to do. E.g. simple maintenance work, implementing simple features, automating test cases etc. But if you let the top 10% lead the project, you most likely won't be needed that much of these people. Most work done by these people is caused by themselves, by writing bad software due to lack of good leader.
10% are just causing damage. I'm not talking about terrorists and criminals. I have seen software developers who have tried (their best?), but still end up causing just damage to the code that someone else needs to fix, costing much more than their own wasted time. You really must use code reviews if you don't know your team members, to find these people early.
Re:Most "Professional programmers" are useless. ( Score: 5 , Funny)raymorris ( 2726007 ) , Sunday April 29, 2018 @01:51AM ( #56522329 ) Journalto find these people earlyand promote them to management where they belong.
Seems about right. Constantly learning, studying ( Score: 5 , Insightful)gbjbaanb ( 229885 ) writes:That seems about right to me.
I have a lot of weaknesses. My people skills suck, I'm scrawny, I'm arrogant. I'm also generally known as a really good programmer and people ask me how/why I'm so much better at my job than everyone else in the room. (There are a lot of things I'm not good at, but I'm good at my job, so say everyone I've worked with.)
I think one major difference is that I'm always studying, intentionally working to improve, every day. I've been doing that for twenty years.
I've worked with people who have "20 years of experience"; they've done the same job, in the same way, for 20 years. Their first month on the job they read the first half of "Databases for Dummies" and that's what they've been doing for 20 years. They never read the second half, and use Oracle database 18.0 exactly the same way they used Oracle Database 2.0 - and it was wrong 20 years ago too. So it's not just experience, it's 20 years of learning, getting better, every day. That's 7,305 days of improvement.
Re: ( Score: 2 )m00sh ( 2538182 ) writes:I think I can guarantee that they are a lot better at their jobs than you think, and that you are a lot worse at your job than you think too.
Re: ( Score: 2 )raymorris ( 2726007 ) writes:That seems about right to me.I have a lot of weaknesses. My people skills suck, I'm scrawny, I'm arrogant. I'm also generally known as a really good programmer and people ask me how/why I'm so much better at my job than everyone else in the room. (There are a lot of things I'm not good at, but I'm good at my job, so say everyone I've worked with.)
I think one major difference is that I'm always studying, intentionally working to improve, every day. I've been doing that for twenty years.
I've worked with people who have "20 years of experience"; they've done the same job, in the same way, for 20 years. Their first month on the job they read the first half of "Databases for Dummies" and that's what they've been doing for 20 years. They never read the second half, and use Oracle database 18.0 exactly the same way they used Oracle Database 2.0 - and it was wrong 20 years ago too. So it's not just experience, it's 20 years of learning, getting better, every day. That's 7,305 days of improvement.
If you take this attitude towards other people, people will not ask you for help. At the same time, you will not be able to ask for their help.
You're not interviewing your peers. They are already in your team. You should be working together.
I've seen superstar programmers suck the life out of project by over-complicating things and not working together with others.
Which part? Learning makes you better? ( Score: 2 )phantomfive ( 622387 ) writes:You quoted a lot. Is there one part in particular you have in mind? The thesis of my post is of course "constant learning, on purpose, makes you better".
> If you take this attitude towards other people, people will not ask you for help. At the same time, you will not be able to ask for their help.
Are you saying that trying to learn means you can't ask for help, or was there something more specific? For me, trying to learn means asking.
Trying to learn, I've had the opportunity to ask for help from peop
Re: ( Score: 2 )complete loony ( 663508 ) writes:The difference between a smart programmer who succeeds and a stupid programmer who drops out is that the smart programmer doesn't give up.
Re: ( Score: 2 )serviscope_minor ( 664417 ) writes:In other words;
What is often mistaken for 20 years' experience, is just 1 year's experience repeated 20 times.
Re: ( Score: 2 )asifyoucare ( 302582 ) , Sunday April 29, 2018 @08:49AM ( #56522999 )
"10% are just causing damage. I'm not talking about terrorists and criminals."
Terrorists and criminals have nothing on those guys. I know a guy who is one of those. Worse, he's both motivated and enthusiastic. He also likes to offer help and advice to other people who don't know the systems well.
Re:Most "Professional programmers" are useless. ( Score: 5 , Insightful)gweihir ( 88907 ) writes:Good point. To quote Kurt von Hammerstein-Equord:
"I divide my officers into four groups. There are clever, diligent, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and diligent -- their place is the General Staff. The next lot are stupid and lazy -- they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the intellectual clarity and the composure necessary for difficult decisions. One must beware of anyone who is stupid and diligent -- he must not be entrusted with any responsibility because he will always cause only mischief."
Re: ( Score: 2 )apoc.famine ( 621563 ) writes:Oops. Good thing I never did anything military. I am definitely in the "clever and lazy" class.
Re: ( Score: 2 )Software_Dev_GL ( 5377065 ) writes:I was just thinking the same thing. One of my passions in life is coming up with clever ways to do less work while getting more accomplished.
Re: ( Score: 2 )gweihir ( 88907 ) writes:It's called the Pareto Distribution [wikipedia.org]. The number of competent people (people doing most of the work) in any given organization goes like the square root of the number of employees.
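Taking the square-root rule of thumb at face value (it is the commenter's claim, not something verified here), the numbers are easy to run:

import math

# Claimed scaling: in an organization of N people, roughly sqrt(N)
# do most of the work. Purely a back-of-the-envelope illustration.
for n in (10, 100, 1000, 10000):
    print(f"{n:>6} employees -> ~{round(math.sqrt(n))} doing most of the work")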
Re: ( Score: 2 )geoskd ( 321194 ) writes:Matches my observations. 10-15% are smart, can think independently, can verify claims by others and can identify and use rules in whatever they do. They are not fooled by things "everybody knows" and see standard approaches as first approximations that, of course, need to be verified to work. They do not trust anything blindly, but can identify whether something actually works well, and build up a toolbox of such things.
The problem is that in coding, you do not have a "(mass) production step", and that is the
Re: ( Score: 2 )Tablizer ( 95088 ) , Sunday April 29, 2018 @01:54AM ( #56522331 ) JournalIn basic concept I agree with your theory, it fits my own anecdotal experience well, but I find that your numbers are off. The top bracket is actually closer to 20%. The reason it seems so low is that a large portion of the highly competent people are running one-programmer shows, so they have no co-workers to appreciate their knowledge and skill. The places they work do a very good job of keeping them well paid and happy (assuming they don't own the company outright), so they rarely if ever switch jobs.
The
Re:Most "Professional programmers" are useless. ( Score: 4 , Interesting)Cesare Ferrari ( 667973 ) writes:at least 70, probably 80, maybe even 90 percent of professional programmers should just fuck off and do something else as they are useless at programming.Programming is statistically a dead-end job. Why should anyone hone a dead-end skill that you won't be able to use for long? For whatever reason, the industry doesn't want old programmers.
Otherwise, I'd suggest longer training and education before they enter the industry. But that just narrows an already narrow window of use.
Re: ( Score: 2 )gweihir ( 88907 ) writes:Well, it does rather depend on which industry you work in - i've managed to find interesting programming jobs for 25 years, and there's no end in sight for interesting projects and new avenues to explore. However, this isn't for everyone, and if you have good personal skills then moving from programming into some technical management role is a very worthwhile route, and I know plenty of people who have found very interesting work in that direction.
Re: ( Score: 3 , Insightful)gweihir ( 88907 ) writes:I think that is a misinterpretation of the facts. Old(er) coders that are incompetent are just much more obvious, and usually are also limited to technologies that have gotten old as well. Hence the 90% of older coders who cannot actually hack it (and never really could) get sacked at some point and cannot find a new job with their limited and outdated skills. The 10% that are good at it do not need to worry, though. The ones who should worry are their employers, as these people approach retirement age.
Re: ( Score: 2 )tomhath ( 637240 ) writes:My experience as an IT Security Consultant (I also do some coding, but only at full rates) confirms that. Most are basically helpless and many have negative productivity, because people with a clue need to clean up after them. "Learn to code"? We have far too many coders already.
Re: ( Score: 2 )DaMattster ( 977781 ) writes:You can't bluff your way through writing software, but many, many people have bluffed their way into a job and then tried to learn it from the people who are already there. In a marginally functional organization those incompetents are let go pretty quickly, but sometimes they stick around for months or years.
Apparently the author of this book is one of those, probably hired and fired several times before deciding to go back to his liberal arts roots and write a book.
Re: ( Score: 2 )gweihir ( 88907 ) writes:There are some mechanics that bluff their way through an automotive repair. It's the same damn thing
Re: ( Score: 2 )phantomfive ( 622387 ) writes:I think you can and this is by far not the first piece describing that. Here is a classic: https://blog.codinghorror.com/... [codinghorror.com]
Yet these people somehow manage to actually have "experience" because they worked in a role they are completely unqualified to fill.
Re: ( Score: 2 )Fiddling with JavaScript libraries to get a fancy dancy interface that makes PHB's happy is a sought-after skill, for good or bad. Now that we rely more on half-ass libraries, much of "programming" is fiddling with dark-grey boxes until they work good enough.This drives me crazy, but I'm consoled somewhat by the fact that it will all be thrown out in five years anyway.
Nov 01, 2008 | IEEE Software, pp.18-19
As I write this column, I'm in the middle of two summer projects; with luck, they'll both be finished by the time you read it.
- One involves a forensic analysis of over 100,000 lines of old C and assembly code from about 1990, and I have to work on Windows XP.
- The other is a hack to translate code written in weird language L1 into weird language L2 with a program written in scripting language L3, where none of the L's even existed in 1990; this one uses Linux. Thus it's perhaps a bit surprising that I find myself relying on much the same toolset for these very different tasks.
... ... ...
There has surely been much progress in tools over the 25 years that IEEE Software has been around, and I wouldn't want to go back in time.
But the tools I use today are mostly the same old ones: grep, diff, sort, awk, and friends. This might well mean that I'm a dinosaur stuck in the past.
On the other hand, when it comes to doing simple things quickly, I can often have the job done while experts are still waiting for their IDE to start up. Sometimes the old ways are best, and they're certainly worth knowing well.
Oct 02, 2017 | profile.theguardian.com
Terryl Dorian , 21 Sep 2017 13:26
That's Silicon Valley's dirty secret. Most tech workers in Palo Alto make about as much as the high school teachers who teach their kids. And these are the top coders in the country!
Ray D Wright -> RogTheDodge , 21 Sep 2017 14:52
I don't see why more Americans would want to be coders. These companies want to drive down wages for workers here and then also ship jobs offshore...
Richard Livingstone -> KatieL , 21 Sep 2017 14:50
+++1 to all of that.
Arne Babenhauserheide -> Evelita , 21 Sep 2017 14:48
Automated coding just pushes the level of coding further up the development food chain, rather than getting rid of it. It is the wrong approach for current tech. AI that is smart enough to model new problems and create its own descriptive and runnable language - hopefully after my lifetime, but coming sometime.
What coding does not teach is how to improve our non-code infrastructure and how to keep it running (that's the stuff which actually moves things). Code can optimize stuff, but it needs actual actuators to affect reality. Sometimes these actuators are actual people walking on top of a roof while fixing it.
WyntonK , 21 Sep 2017 14:47
Silicon Valley companies have placed lowering wages and flooding the labor market with cheaper labor near the top of their goals and as a business model.
TheEgg -> UncommonTruthiness , 21 Sep 2017 14:43
There are quite a few highly qualified American software engineers who lose their jobs to foreign engineers who will work for much lower salaries and benefits. This is a major ingredient of the libertarian virus that has engulfed and is contaminating the Valley, going hand in hand with assembling products in China by slave labor.
If you want a high tech executive to suffer a stroke, mention the words "labor unions".
Mike_Dexter -> Evelita , 21 Sep 2017 14:43
"The ship has sailed on this activity as a career."
Nope. Married to a highly-technical skillset, you can still make big bucks. I say this as someone involved in this kind of thing academically and our Masters grads have to beat the banks and fintech companies away with dog shits on sticks. You're right that you can teach anyone to potter around and throw up a webpage but at the prohibitively difficult maths-y end of the scale, someone suitably qualified will never want for a job.
In a similar vein, if you accept the argument that it does drive down wages, wouldn't the culprit actually be the multitudes of online and offline courses and tutorials available to an existing workforce?
Terryl Dorian -> CountDooku , 21 Sep 2017 14:42
Funny you should pick medicine, law, engineering... 3 fields that are *not* taught in high school. The writer is simply adding "coding" to your list. So it seems you agree with his "garbage" argument after all.
anticapitalist -> RogTheDodge , 21 Sep 2017 14:42
Key word is "good". Teaching everyone is just going to increase the pool of programmers whose code I need to fix. India isn't being hired for the quality, they're being hired for cheap labor. As for women, sure, I wouldn't mind more women around, but why does no one say there needs to be more equality in garbage collection or plumbing? (And yes, plumbers are highly paid professionals.)
WithoutPurpose , 21 Sep 2017 14:42
In the end I don't care what the person is, I just want to hire and work with the best and not someone whose work I have to correct because they were hired by quota. If women only graduate at 15%, why should IT contain more than that? And let's be a bit honest with the facts: of those 15%, how many spend their high school years staying up all night hacking? Very few. Now the few that did are some of the better developers I work with, but that pool isn't going to increase by forcing every child to program... just like sports aren't better by making everyone take gym class.
I ran a development team for 10 years and I never had any trouble hiring programmers - we just had to pay them enough. Every job would have at least 10 good applicants.
oddbubble -> Tori Turner , 21 Sep 2017 14:41
Two years ago I decided to scale back a bit and go into programming (I can code real-time low latency financial apps in 4 languages) and I had four interviews in six months with stupidly low salaries. I'm lucky in that I can bounce between tech and the business side so I got a decent job out of tech.
My entirely anecdotal conclusion is that there is no shortage of good programmers just a shortage of companies willing to pay them.
I've worn many hats so far: I started out as a sysadmin, then I moved on to web development, then back end, and now I'm doing test automation because I am on almost the same money for half the effort.
peter nelson -> raffine , 21 Sep 2017 14:38
But the concepts won't. Good programming requires the ability to break down a task, organise the steps in performing it, identify parts of the process that are common or repetitive so they can be bundled together, handed off or delegated, etc. These concepts can be applied to any programming language, and indeed to many non-software activities.
Oliver Jones -> Trumbledon , 21 Sep 2017 14:37
In the city maybe with a financial background, the exception.
anticapitalist -> Ethan Hawkins , 21 Sep 2017 14:32
Well, to his point, sort of... either everything will go PHP or all those entry-level PHP developers will be on the street. A good Java or C developer is hard to come by. And to the others: being a developer, especially a good one, is nothing like reading and writing. The industry is already saturated with poor coders just doing it for a paycheck.
peter nelson -> Tori Turner , 21 Sep 2017 14:31
KatieL -> Mishal Almohaimeed , 21 Sep 2017 14:30
I'm just going to say this once: not everyone with a computer science degree is a coder. And vice versa. I'm retiring from a 40-year career as a software engineer. Some of the best software engineers I ever met did not have CS degrees.
"already developing automated coding scripts. "raffine , 21 Sep 2017 14:29Pretty much the entire history of the software industry since FORAST was developed for the ORDVAC has been about desperately trying to make software development in some way possible without driving everyone bonkers.
The gulf between FORAST and today's IDE-written, type-inferring high level languages, compilers, abstracted run-time environments, hypervisors, multi-computer architectures and general tech-world flavour-of-2017-ness is truly immense[1].
And yet software is still fucking hard to write. There's no sign it's getting easier despite all that work.
Automated coding was promised as the solution in the 1980s as well. In fact, somewhere in my archives, I've got paper journals which include adverts for automated systems that would make programmers completely redundant by writing all your database code for you. These days, we'd think of those tools as automated ORM generators and they don't fix the problem; they just make a new one -- ORM impedance mismatch -- which needs more engineering on top to fix...
The tools don't change the need for the humans, they just change what's possible for the humans to do.
[1] FORAST executed in about 20,000 bytes of memory without even an OS. The compile artifacts for the map-reduce system I built today are an astonishing hundred million bytes... and don't include the necessary mapreduce environment, management interface, node operating system and distributed filesystem...
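The "ORM impedance mismatch" mentioned a couple of comments up is easy to show without any real ORM involved: objects naturally form nested graphs, while a relational store wants flat rows joined by keys, so any mapping code, hand-written or generated, has to translate between the two shapes. A minimal sketch in Python (class and field names invented for the example):

from dataclasses import dataclass, field

@dataclass
class OrderLine:
    sku: str
    qty: int

@dataclass
class Order:
    order_id: int
    customer: str
    lines: list = field(default_factory=list)

# In memory: one nested object graph.
order = Order(1001, "Alice", [OrderLine("widget", 3), OrderLine("gadget", 1)])

# In a relational store: flat rows plus a foreign key. An ORM spends its
# life converting between these two shapes.
order_row = (order.order_id, order.customer)
line_rows = [(order.order_id, line.sku, line.qty) for line in order.lines]
print(order_row)
print(line_rows)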
Whatever they are taught today will be obsolete tomorrow.
yannick95 -> savingUK , 21 Sep 2017 14:27
Wiretrip -> mcharts , 21 Sep 2017 14:25
"There are already top quality coders in China and India"
AHAHAHAHAHAHAHAHAHAHAHA *rolls on the floor laughing* Yes........ 1%... and 99% of incredibly bad, incompetent, untalented ones that cost 50% of a good developer but produce only 5% in comparison. And I'm talking with a LOT of practical experience through more than a dozen corporations all over the world which have been outsourcing to India... all have been disasters for the companies (but good for the execs who pocketed big bonuses and left the company before the disaster blows up in their face).
Enough people have had their hands burnt by now with shit companies like TCS (Tata) that they are starting to look closer to home again...
TomRoche , 21 Sep 2017 14:11
Tech executives have pursued [the goal of suppressing workers' compensation] in a variety of ways. One is collusion – companies conspiring to prevent their employees from earning more by switching jobs. The prevalence of this practice in Silicon Valley triggered a justice department antitrust complaint in 2010, along with a class action suit that culminated in a $415m settlement.
Folks interested in the story of the Techtopus (less drily presented than in the links in this article) should check out Mark Ames' reporting, esp this overview article and this focus on the egregious Steve Jobs (whose canonization by the US corporate-funded media is just one more impeachment of their moral bankruptcy).
Another, more sophisticated method is importing large numbers of skilled guest workers from other countries through the H1-B visa program. These workers earn less than their American counterparts, and possess little bargaining power because they must remain employed to keep their status.
Folks interested in H-1B and US technical visas more generally should head to Norm Matloff 's summary page , and then to his blog on the subject .
I have watched as schools run by trade unions have done the opposite for the last 5 decades. By limiting the number of graduates, they were able to help maintain living wages and benefits. This has been stopped in my area due to the pressure of owner-run "trade associations".
savingUK , 21 Sep 2017 13:38
During that same time period I have witnessed trade associations controlled by company owners, while publicising their support of the average employee, invest enormous amounts of membership fees in creating alliances with public institutions. Their goal has been that of flooding the labor market and thus keeping wages low. A double hit for the average worker because membership fees were paid by employees as well as those in control.
And so it goes....
Coding jobs are just as susceptible to being moved to lower cost areas of the world as hardware jobs already have been. It's already happening. There are already top quality coders in China and India. There is a much larger pool to choose from and they are just as good as their western counterparts and work harder for much less money.
Globalisation is the reason, and trying to force wages up in one country simply moves the jobs elsewhere. The only way I can think of to limit this happening is to keep the company and coders working at the cutting edge of technology.
whitehawk66 , 21 Sep 2017 15:18
UncommonTruthiness , 21 Sep 2017 14:10
I'd be much more impressed if I saw the hordes of young male engineers here in SF expressing a semblance of basic common sense, basic self awareness and basic life skills. I'd say 91.3% are oblivious, idiotic children.
They would definitely not survive the zombie apocalypse.
P.S. not every kid wants or needs to have their soul sucked out of them sitting in front of a screen full of code for some idiotic service that some other douchbro thinks is the next iteration of sliced bread.
The demonization of Silicon Valley is clearly the next place to put all blame. Look what "they" did to us: computers, smart phones, HD television, world-wide internet, on and on. Get a rope!
William Fitch III , 21 Sep 2017 13:52
I moved there in 1978 and watched the orchards and trailer parks on North 1st St. of San Jose transform into a concrete jungle. There used to be quite a bit of semiconductor equipment and device manufacturing in SV during the 80s and 90s. Now quite a few buildings have the same name : AVAILABLE. Most equipment and device manufacturing has moved to Asia.
Programming started with binary, then machine code (hexadecimal or octal) and moved to assembler as a compiled and linked structure. More compiled languages like FORTRAN, BASIC, PL-1, COBOL, PASCAL, C (and all its "+'s") followed making programming easier for the less talented. Now the script based languages (HTML, JAVA, etc.) are even higher level and accessible to nearly all. Programming has become a commodity and will be priced like milk, wheat, corn, non-unionized workers and the like. The ship has sailed on this activity as a career.
Hi: As I have said many times before, there is no shortage of people who fully understand the problem and can see all the connections.However, they all fall on their faces when it comes to the solution. To cut to the chase, Concentrated Wealth needs to go, permanently. Of course the challenge is how to best accomplish this.....
.....Bill
MostlyHarmlessD , , 21 Sep 2017 13:16
Damn engineers and their black and white world view; if they weren't so inept they would've unionized instead of being trampled again and again in the name of capitalism.
mcharts -> Aldous0rwell , 21 Sep 2017 13:07
Not maybe. Too late. American corporations' objective is to low-ball wages here in the US. In India they spoon feed these pupils with affordable cutting edge IT training for next to nothing rupees. These pupils then exaggerate their CVs and ship them out en masse to the western world to dominate the IT industry. I've seen it with my own eyes in action. Those in charge will do anything/everything to maintain their grip on power. No brag. Just fact. Woe to our children and grandchildren.
Where's Bernie Sanders when we need him.
Oct 03, 2017 | discussion.theguardian.com
Richard Livingstone -> Mishal Almohaimeed , 21 Sep 2017 14:46
Wrong again; that approach has been tried since the 80s and will keep failing, only because software development is still more akin to a technical craft than an engineering discipline. The number of elements required to assemble a working non-trivial system is way beyond scriptable.
freeandfair -> Taylor Dotson , 21 Sep 2017 14:26
Wiretrip -> Mishal Almohaimeed , 21 Sep 2017 14:20
> That's some crystal ball you have there. English teachers will need to know how to code? Same with plumbers? Same with janitors, CEOs, and anyone working in the service industry?
You don't believe there will be robots to do plumbing and cleaning? The cleaner's job will be to program robots to do what they need.
CEOs? Absolutely.
English teachers? Both of my kids have school laptops and everything is being done on the computers. The teachers use software and create websites and what not. Yes, even English teachers.
Not knowing / understanding how to code will be the same as not knowing how to use Word/Excel. I am assuming there are people who don't, but I don't know any above the age of 6.
We've had 'automated coding scripts' for years for small tasks. However, anyone who says they're going to obviate programmers, analysts and designers doesn't understand the software development process.
Ethan Hawkins -> David McCaul , 21 Sep 2017 13:22
Even if expert systems (an 80's concept, BTW) could code, we'd still have a huge need for managers. The hard part of software isn't even the coding. It's determining the requirements and working with clients. It will require general intelligence to do 90% of what we do right now. The 10% we could automate right now mostly gets in the way. I agree it will change, but it's going to take another 20-30 years to really happen.
Mishal Almohaimeed -> PolydentateBrigand , 21 Sep 2017 13:17
Wrong, software companies are already developing automated coding scripts. You'll get a bunch of door-to-door knife salespeople once the dust settles; that's what you'll get.
freeandfair -> rgilyead , 21 Sep 2017 14:22
> In 20 years time AI will be doing the coding
Possible, but you still have to understand how AI operates and what it can and cannot do.
Oct 03, 2017 | discussion.theguardian.com
Coding has little or nothing to do with Silicon Valley. They may or may not have ulterior motives, but ultimately they are nothing in the scheme of things.
oldzealand , 21 Sep 2017 14:58
I disagree with teaching coding as a discrete subject. I think it should be combined with home economics and woodworking because 90% of these subjects consist of transferable skills that exist in all of them. Only a tiny residual is actually topic-specific.
In the case of coding, the residual consists of drawing skills and typing skills. Programming language skills? Irrelevant. You should choose the tools to fit the problem. Neither of these needs a computer. You should only ever approach the computer at the very end, after you've designed and written the program.
Is cooking so very different? Do you decide on the ingredients before or after you start? Do you go shopping half-way through cooking an omelette?
With woodwork, do you measure first or cut first? Do you have a plan or do you randomly assemble bits until it does something useful?
Real coding, taught correctly, is barely taught at all. You teach the transferable skills. ONCE. You then apply those skills in each area in which they apply.
What other transferable skills apply? Top-down design, bottom-up implementation. The correct methodology in all forms of engineering. Proper testing strategies, also common across all forms of engineering. However, since these tests are against logic, they're a test of reasoning. A good thing to have in the sciences and philosophy.
Technical writing is the art of explaining things to idiots. Whether you're designing a board game, explaining what you like about a house, writing a travelogue or just seeing if your wild ideas hold water, you need to be able to put those ideas down on paper in a way that exposes all the inconsistencies and errors. It doesn't take much to clean it up to be readable by humans. But once it is cleaned up, it'll remain free of errors.
So I would teach a foundation course that teaches top-down reasoning, bottom-up design, flowcharts, critical path analysis and symbolic logic. Probably aimed at age 7. But I'd not do so wholly in the abstract. I'd have it thoroughly mixed in with one field, probably cooking as most kids do that and it lacks stigma at that age.
I'd then build courses on various crafts and engineering subjects on top of that, building further hierarchies where possible. Eliminate duplication and severely reduce the fictions we call disciplines.
I used to employ 200 computer scientists in my business and now teach children, so I'm apparently as guilty as hell. To be compared with a carpenter is, however, a true compliment, if you mean those that create elegant, aesthetically-pleasing, functional, adaptable and long-lasting bespoke furniture, because our crafts of problem-solving using limited resources in confined environments to create working, life-improving artifacts both exemplify great human ingenuity in action. Capitalism or no.
peter nelson , 21 Sep 2017 14:29
Fanastril , 21 Sep 2017 14:13
"But coding is not magic. It is a technical skill, akin to carpentry."
But some people do it much better than others. Just like journalism. This article is complete nonsense, as I discuss in another comment. The author might want to consider a career in carpentry.
NDReader , 21 Sep 2017 14:12
"But coding is not magic. It is a technical skill, akin to carpentry."
It is a way of thinking. Perhaps carpentry is too, but the arrogance of the above statement shows a soul who is done thinking.
MostlyHarmlessD , 21 Sep 2017 13:08
"But coding is not magic. It is a technical skill, akin to carpentry."
I was about to take offence on behalf of programmers, but then I realized that would be snobbish and insulting to carpenters too. Many people can code, but only a few can code well, and fewer still become the masters of the profession. Many people can learn carpentry, but few become joiners, and fewer still become cabinetmakers.
Many people can write, but few become journalists, and fewer still become real authors.
A carpenter!? Good to know that engineers are still thought of as jumped up tradesmen.
Oct 02, 2017 | profile.theguardian.com
Wiretrip -> Mark Mauvais , 21 Sep 2017 14:23
Yes, 'engineers' (and particularly mathematicians) write appalling code.
Trumbledon , 21 Sep 2017 14:23
A good developer can easily earn £600-800 per day, which suggests to me that they are in high demand, and society needs more of them.
Wiretrip -> KatieL , 21 Sep 2017 14:22
Agreed, to many people 'coding' consists of copying other people's JavaScript snippets from StackOverflow... I tire of the many frauds in the business...
stratplaya , 21 Sep 2017 14:21
You can learn to code, but that doesn't mean you'll be good at it. There will be a few who excel but most will not. This isn't a reflection on them but rather the reality of the situation. In any given area some will do poorly, more will do fairly, and a few will excel. The same applies in any field.
peter nelson -> UncommonTruthiness , 21 Sep 2017 14:21
Fanastril -> Taylor Dotson , 21 Sep 2017 14:17
The ship has sailed on this activity as a career.
Oh, rubbish. I'm in the process of retiring from my job as an Android software designer so I'm tasked with hiring a replacement for my organisation. It pays extremely well, the work is interesting, and the company is successful and serves an important worldwide industry.
Still, finding highly-qualified people is hard and they get snatched up in mid-interview because the demand is high. Not only that but at these pay scales, we can pretty much expect the Guardian will do yet another article about the unconscionable gap between what rich, privileged techies like software engineers make and everyone else.
Really, we're damned if we do and damned if we don't. If tech workers are well-paid we're castigated for gentrifying neighbourhoods and living large, and yet anything that threatens to lower what we're paid produces conspiracy-theory articles like this one.
I learned to cook in school. Was there a shortage of cooks? No. Did I become a professional cook? No. But I sure as hell would not have missed the skills I learned for the world, and I use them every day.
KatieL -> Taylor Dotson , 21 Sep 2017 14:13
Oh no, there's loads of people who say they're coders, who have on their CV that they're coders, that have been paid to be coders. Loads of them. Amazingly, about 9 out of 10 of them, experienced coders all, spent ages doing it, not a problem to do it, definitely a coder, not a problem being "hands on"... can't actually write working code when we actually ask them to.
youngsteveo -> Taylor Dotson , 21 Sep 2017 14:12
I feel for your brother, and I've experienced the exact same BS "test" that you're describing. However, when I said "rudimentary coding exam", I wasn't talking about classic fizz-buzz questions, Fibonacci problems, whiteboard tests, or anything of the sort. We simply ask people to write a small amount of code that will solve a simple real world problem. Something that they would be asked to do if they got hired. We let them take a long time to do it. We let them use Google to look things up if they need. You would be shocked how many "qualified applicants" can't do it.
Fanastril -> Taylor Dotson , 21 Sep 2017 14:11
It is not zero-sum: if you teach something empowering, like programming, motivating is a lot easier, and they will learn more.
UncommonTruthiness , 21 Sep 2017 14:10
The demonization of Silicon Valley is clearly the next place to put all blame. Look what "they" did to us: computers, smart phones, HD television, world-wide internet, on and on. Get a rope!
KatieL -> Taylor Dotson , 21 Sep 2017 14:10
I moved there in 1978 and watched the orchards and trailer parks on North 1st St. of San Jose transform into a concrete jungle. There used to be quite a bit of semiconductor equipment and device manufacturing in SV during the 80s and 90s. Now quite a few buildings have the same name : AVAILABLE. Most equipment and device manufacturing has moved to Asia.
Programming started with binary, then machine code (hexadecimal or octal) and moved to assembler as a compiled and linked structure. More compiled languages like FORTRAN, BASIC, PL-1, COBOL, PASCAL, C (and all its "+'s") followed making programming easier for the less talented.
Now the script based languages (HTML, JAVA, etc.) are even higher level and accessible to nearly all. Programming has become a commodity and will be priced like milk, wheat, corn, non-unionized workers and the like. The ship has sailed on this activity as a career.
peter nelson , 21 Sep 2017 14:09
"intelligence, creativity, diligence, communication ability, or anything else that a job"
None of those are any use if, when asked to turn your intelligent, creative, diligent, communicated idea into some software, you perform as well as most candidates do at simple coding assessments... and write stuff that doesn't work.
kcrane , 21 Sep 2017 14:07
"At its root, the campaign for code education isn't about giving the next generation a shot at earning the salary of a Facebook engineer. It's about ensuring those salaries no longer exist, by creating a source of cheap labor for the tech industry."
Of course the writer does not offer the slightest shred of evidence to support the idea that this is the actual goal of these programs. So it appears that the tinfoil-hat conspiracy brigade on the Guardian is operating not only below the line, but above it, too.
The fact is that few of these students will ever become software engineers (which, incidentally, is my profession) but programming skills are essential in many professions for writing little scripts to automate various tasks, or to just understand 21st century technology.
Sadly this is another article by a partial journalist who knows nothing about the software industry, but hopes to subvert what he has read somewhere to support a position he had already assumed. As others have said, understanding coding has already become akin to being able to use a pencil. It is a basic requirement of many higher level roles.
DebuggingLife , 21 Sep 2017 14:06
But knowing which end of a pencil to put on the paper (the equivalent of the level of coding taught in schools) isn't the same as being an artist. Moreover, anyone who knows the field recognises that top coders are gifted; they embody genius. There are coding Caravaggios out there, but few have the experience to know that. No amount of teaching will produce high level coders from average humans; there is an intangible something needed, as there is in music and art, to elevate the merely good to genius.
All to say, however many are taught the basics, it won't push down the value of the most talented coders, and so won't reduce the costs of the technology industry in any meaningful way as it is an industry, like art, that relies on the few not the many.
Not all of those children will want to become programmers, but at least the barrier to entry - for more to at least experience it - will be lower.
rgilyead , 21 Sep 2017 14:01
Teaching music to only the children whose parents can afford music tuition means that society misses out on a greater potential for some incredibly gifted musicians to shine through.
Moreover, learning to code really means learning how to wrangle with the practical application of abstract concepts, algorithms, numerical skills, logic, reasoning, etc. which are all transferrable skills some of which are not in the scope of other classes, certainly practically.
Like music, sport, literature etc., programming a computer, a website, a device, a smartphone is an endeavour that can be truly rewarding as merely a pastime, and similarly is limited only by one's imagination.
"...coding is not magic. It is a technical skill, akin to carpentry."
I think that is a severe underestimation of the level of expertise required to conceptualise and deliver robust and maintainable code. The complexity of integrating software is more equivalent to constructing an entire building with components of different materials. If you think teaching coding is enough to enable software design and delivery then good luck.
Taylor Dotson -> cwblackwell , 21 Sep 2017 14:00
Yeah, but mania over coding skills inevitably pushes other skills out of the curriculum (or deemphasizes them). Education is zero-sum in that there's only so much time and energy to devote to it. Hence, you need more than vague appeals to "enhancement," especially given the risks pointed out by the author.
Taylor Dotson -> PolydentateBrigand , 21 Sep 2017 13:57
Taylor Dotson -> WumpieJr , 21 Sep 2017 13:49
"Talented coders will start new tech businesses and create more jobs."
That could be argued for any skill set, including those found in the humanities and social sciences likely to be pushed out by the mania over coding ability. Education is zero-sum: time spent on one subject is time that invariably can't be spent learning something else.
martinusher , 21 Sep 2017 13:31
"If they can't literally fix everything let's just get rid of them, right?"
That's a strawman. His point is rooted in the recognition that we only have so much time, energy, and money to invest in solutions. Ones that feel good but may not do anything distract us from the deeper structural issues in our economy. The problem with thinking "education" will fix everything is that it leaves the status quo unquestioned.
Being able to write code and being able to program are two very different skills. In language terms its the difference between being able to read and write (say) English and being able to write literature; obviously you need a grasp of the language to write literature but just knowing the language is not the same as being able to assemble and marshal thought into a coherent pattern prior to setting it down.freeandfair , 21 Sep 2017 13:30To confuse things further there's various levels of skill that all look the same to the untutored eye. Suppose you wished to bridge a waterway. If that waterway was a narrow ditch then you could just throw a plank across. As the distance to be spanned got larger and larger eventually you'd have to abandon intuition for engineering and experience. Exactly the same issues happen with software but they're less tangible; anyone can build a small program but a complex system requires a lot of other knowledge (in my field, that's engineering knowledge -- coding is almost an afterthought).
It's a good idea to teach young people to code, but I wouldn't raise their expectations of huge salaries too much. For children, educating them in wider, more general fields and abstract activities such as music will pay huge dividends, far more than just teaching them whatever the fashionable language du jour is. (...which should be Logo, but it's too subtle and abstract, it doesn't look "real world" enough!).
I don't see this as an issue. Sure, there could be ulterior motives there, but anyone who wants to still be employed in 20 years has to know how to code. It is not that everyone will be a coder, but their jobs will either include part-time coding or will require understanding of software and what it can and cannot do. AI is going to be everywhere.

WumpieJr , 21 Sep 2017 13:23

What a dumpster argument. I am not a programmer or even close, but a basic understanding of coding has been important to my professional life. Coding isn't just about writing software. Understanding how algorithms work, even simple ones, is a general skill on par with algebra.

youngsteveo , 21 Sep 2017 13:16

But it isn't just about coding for Tarnoff. He seems to hold education in contempt generally. "The far-fetched premise of neoliberal school reform is that education can mend our disintegrating social fabric." If they can't literally fix everything let's just get rid of them, right?
Never mind that a good education is clearly one of the most important things you can do for a person to improve their quality of life wherever they live in the world. It's "neoliberal," so we better hate it.
I'm not going to argue that the goal of mass education isn't to drive down wages, but the idea that the skills gap is a myth doesn't hold water in my experience. I'm a software engineer and manager at a company that pays well over the national average, with great benefits, and it is downright difficult to find a qualified applicant who can pass a rudimentary coding exam.A lot of resumes come across my desk that look qualified on paper, but that's not the same thing as being able to do the job. Secondarily, while I agree that one day our field might be replaced by automation, there's a level of creativity involved with good software engineering that makes your carpenter comparison a bit flawed.
Oct 02, 2017 | discussion.theguardian.com
swelle , 21 Sep 2017 17:36

I do think it's peculiar that Silicon Valley requires so many H1B visas... 'we can't find the talent here' is the main excuse, though many 'older' (read: over 40) native-born tech workers will tell you there's plenty of talent here already, but even with the immigration hassles, H1B workers will be cheaper overall...

Julian Williams , 21 Sep 2017 18:06
This is interesting. Indeed, I do think there is an excess supply of software programmers. There is only a modest number of decent jobs, say as an algorithms developer in finance, general architecture of complex systems or to some extent in systems security. However, these jobs are usually occupied and the incumbents are not likely to move on quickly. Road blocks are also put up by creating sub-networks of engineers who ensure that some knowledge is not ubiquitous.

FabBlondie -> peter nelson , 21 Sep 2017 17:42

Most very high paying jobs in the technology sector are in the same standard upper management roles as in every other industry.
Still, the ability to write a computer program is an enabler; knowing how it works means you have an ability to imagine something and make it real. To me it is a bit like language: some people can use language to make more money than others, but it is still important to have a basic level of understanding.
And yet I know a lot of people that has happened to. Better to replace a $125K-a-year programmer with one who will do the same job, or even less of it, for $50K.

This could backfire if the programmers don't find the work or pay to match their expectations... Programmers, after all, tend to make very good hackers if their minds are turned to it.

freeandfair -> FabBlondie , 21 Sep 2017 18:23
peter nelson -> FabBlondie , 21 Sep 2017 18:23> While I like your idea of what designing a computer program involves, in my nearly 40 years experience as a programmer I have rarely seen this done.Well, I am a software architect and what he says sounds correct for a certain type of applications. Maybe you do a different type of programming.
freeandfair -> FabBlondie , 21 Sep 2017 18:22While I like your idea of what designing a computer program involves, in my nearly 40 years experience as a programmer I have rarely seen this done.
How else can you do it?
Java is popular because it's a very versatile language - On this list it's the most popular general-purpose programming language. (Above it javascript is just a scripting language and HTML/CSS aren't even programming languages) https://fossbytes.com/most-used-popular-programming-languages/ ... and below it you have to go down to C# at 20% to come to another general-purpose language, and even that's a Microsoft house language.
Also the "correct" choice of programming languages is also based on how many people in the shop know it so they maintain code that's written in it by someone else.
freeandfair -> mlzarathustra , 21 Sep 2017 18:20> job-specific training is completely different. What a joke to persuade public school districts to pick up the tab on job training.Well, it is either that or the kids themselves who have to pay for it and they are even less prepared to do so. Ideally, college education should be tax payer paid but this is not the case in the US. And the employer ideally should pay for the job related training, but again, it is not the case in the US.
theindyisbetter -> Game Cabbage , 21 Sep 2017 18:18> The bigger problem is that nobody cares about the arts, and as expensive as education is, nobody wants to carry around a debt on a skill that won't bring in the buckPlenty of people care about the arts but people can't survive on what the arts pay. That was pretty much the case all through human history.
No. The amount of work is not a fixed sum. That's the lump of labour fallacy. We are not tied to the land.

ConBrio , 21 Sep 2017 18:10

Since newspapers are consolidating and cutting jobs, gotta clamp down on colleges offering BA degrees, particularly in English Literature and journalism.

LMichelle -> chillisauce , 21 Sep 2017 18:03

And then... and...then...and...
This article focuses on the US schools, but I can imagine it's the same in the UK. I don't think these courses are going to be about creating great programmers capable of new innovations as much as having a work force that can be their own IT Help Desk.edmundberk -> FabBlondie , 21 Sep 2017 17:57They'll learn just enough in these classes to do that.
Then most companies will be hiring for other jobs, but want to make sure you have the IT skills to serve as your own "help desk" (although they will get no salary for their IT work).
I find that quite remarkable - 40 years ago you must have been using assembler and with hardly any memory to work with. If you blitzed through that without applying the thought processes described, well... I'm surprised.

James Dey , 21 Sep 2017 17:55

Funny. Every day in the Brexit articles, I read that increasing the supply of workers has negligible effect on wages.

peter nelson -> peterainbow , 21 Sep 2017 17:54

I was laid off at your age in the depths of the recent recession and I got a job. As I said in another posting, it usually comes down to fresh skills and good personal references who will vouch for your work habits and how well you get on with other members of your team.

Game Cabbage -> theindyisbetter , 21 Sep 2017 17:52

The great thing about software, as opposed to many other jobs, is that it can be done at home while you're laid off. Write mobile (iOS or Android) apps or work on open source projects and get stuff up on GitHub. I've been to many job interviews with my apps loaded on mobile devices so I could show them what I've done.
The situation has a direct comparison to today. It has nothing to do with land. There was a certain amount of profit making work and not enough labour to satisfy demand. There is currently a certain amount of profit making work and in many situations (especially unskilled low paid work) too much labour.edmundberk , 21 Sep 2017 17:52So, is teaching people English or arithmetic all about reducing wages for the literate and numerate?chillisauce , 21 Sep 2017 17:48Or is this the most obtuse argument yet for avoiding what everyone in tech knows - even more blatantly than in many other industries, wages are curtailed by offshoring; and in the US, by having offshoring centres on US soil.
Well, speaking as someone who spends a lot of time trying to find really good programmers... frankly there aren't that many about. We take most of ours from Eastern Europe and SE Asia, which is quite expensive, given the relocation costs to the UK. But worth it.Baobab73 , 21 Sep 2017 17:48So, yes, if more British kids learnt about coding, it might help a bit. But not much; the real problem is that few kids want to study IT in the first place, and that the tuition standards in most UK universities are quite low, even if they get there.
True......

peter nelson -> rebel7 , 21 Sep 2017 17:47

There was recently a programme/podcast on ABC/RN about the HUGE shortage in Australia of techies with specialized security skills.

peter nelson -> jigen , 21 Sep 2017 17:46

Robots, or AI, are already making us more productive. I can write programs today in an afternoon that would have taken me a week a decade or two ago.

Quaestor , 21 Sep 2017 17:43

I can create a class and the IDE will take care of all the accessors and dependencies, enforce our style-guide compliance, stub in the documentation, even most test cases, etc., and all I have to write is the very specific stuff required by my application - the other 90% is generated for me. Same with UI/UX - it stubs in relevant event handlers, bindings, dependencies, etc.
Programmers are a zillion times more productive than in the past, yet the demand keeps growing because so much more stuff in our lives has processors and code. Your car has dozens of processors running lots of software; your TV, your home appliances, your watch, etc.
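To make that point concrete for readers who have not used a modern IDE, here is a minimal, hypothetical Java sketch. Only the two field declarations are "hand-written"; the constructor, accessors, equals/hashCode and toString below them are exactly the kind of members an IDE will generate on request rather than have the developer type.

    // Hypothetical example: the class and fields are made up for illustration.
    import java.util.Objects;

    public class Customer {
        private final String name;   // written by the developer
        private final int accountId; // written by the developer

        // Everything from here down is typical IDE-generated boilerplate
        // (constructor, getters, equals/hashCode, toString).
        public Customer(String name, int accountId) {
            this.name = name;
            this.accountId = accountId;
        }

        public String getName() { return name; }

        public int getAccountId() { return accountId; }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Customer)) return false;
            Customer other = (Customer) o;
            return accountId == other.accountId && Objects.equals(name, other.name);
        }

        @Override
        public int hashCode() {
            return Objects.hash(name, accountId);
        }

        @Override
        public String toString() {
            return "Customer{name='" + name + "', accountId=" + accountId + "}";
        }
    }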
jamesupton , 21 Sep 2017 17:42

Schools really can't win. Don't teach coding, and you're raising a generation of button-pushers. Teach it, and you're pandering to employers looking for cheap labour. Unions in London objected to children being taught carpentry in the twenties and thirties, so it had to be renamed "manual instruction" to get round it. Denying children useful skills is indefensible.
Getting children to learn how to write code, as part of core education, will be the first step to the long overdue revolution. The rest of us will still have to stick to burning buildings down and stringing up the aristocracy.

cjenk415 -> LMichelle , 21 Sep 2017 17:40

Did you misread? It seemed like he was emphasizing that learning to code, like learning art (and sports and languages), will help them develop skills that benefit them in whatever profession they choose.

FabBlondie -> peter nelson , 21 Sep 2017 17:40

While I like your idea of what designing a computer program involves, in my nearly 40 years' experience as a programmer I have rarely seen this done. And, FWIW, IMHO, while choosing the tool (programming language) might reasonably be expected to follow designing a solution, in practice this rarely happens. No, these days it's Java all the way, from day one.

theindyisbetter -> Game Cabbage , 21 Sep 2017 17:40

There was a fixed supply of land and a reduced supply of labour to work the land.

LMichelle , 21 Sep 2017 17:39

Nothing like the situation in a modern economy.
I'd advise parents that the classes they need to make sure their kids excel in are acting/drama. There is no better way to getting that promotion or increasing your pay like being a skilled actor in the job market. It's a fake it till you make it deal.theindyisbetter , 21 Sep 2017 17:36What a ludicrous argument.SheriffFatman -> Game Cabbage , 21 Sep 2017 17:36Let's not teach maths or science or literacy either - then anyone with those skills will earn more.
peter nelson -> peterainbow , 21 Sep 2017 17:32After the Black Death in the middle ages there was a huge under supply of labour. It produced a consistent rise in wages and conditions
It also produced wage-control legislation (which admittedly failed to work).
if there were truly a shortage i wouldn't be unemployed

LMichelle -> loveyy , 21 Sep 2017 17:26

I've heard that before, but when I've dug deeper I've usually found someone who either let their skills go stale, or who had some work issues.
Really? You think they are going to emphasize things like the importance of privacy and consumer rights?

loveyy , 21 Sep 2017 17:25

This really has to be one of the silliest articles I read here in a very long time.

Game Cabbage , 21 Sep 2017 17:24
People, let your children learn to code. Even more, educate yourselves and start to code just for the fun of it - look at it like a game.
The more people know how to code, the more likely they are to understand how stuff works. If you were ever frustrated by how impossible it seems to shop on certain websites, learn to code and you will be frustrated no more. You will understand the intent behind the process.
Even more, you will understand the inherent limitations and what is the meaning of safety. You will be able to better protect yourself in a real time connected world.Learning to code won't turn your kid into a programmer, just like ballet or piano classes won't mean they'll ever choose art as their livelihood. So let the children learn to code and learn along with them
Tipping power to employers in any profession by oversupply of labour is not a good thing. Bit of a macabre example here but...After the Black Death in the middle ages there was a huge under supply of labour. It produced a consistent rise in wages and conditions and economic development for hundreds of years after this. Not suggesting a massive depopulation. But you can achieve the same effects by altering the power balance. With decades of Neoliberalism, the employers side of the power see-saw is sitting firmly in the mud and is producing very undesired results for the vast majority of people.Zuffle -> peterainbow , 21 Sep 2017 17:23Perhaps you're just not very good. I've been a developer for 20 years and I've never had more than 1 week of unemployment.Kevin P Brown -> peterainbow , 21 Sep 2017 17:20" at 55 finding it impossible to get a job"TheLane82 , 21 Sep 2017 17:20I am 59, and it is not just the age aspect it is the money aspect. They know you have experience and expectations, and yet they believe hiring someone half the age and half the price, times 2 will replace your knowledge. I have been contracting in IT for 30 years, and now it is obvious it is over. Experience at some point no longer mitigates age. I think I am at that point now.
Completely true! What needs to happen instead is to teach the real valuable subjects.peter nelson -> mlzarathustra , 21 Sep 2017 17:06Gender studies. Islamic studies. Black studies. All important issues that need to be addressed.
Dear, dear, I know, I know, young people today . . . just not as good as we were. Everything is just going down the loo . . . Just have a nice cuppa camomile (or chamomile if you're a Yank) and try to relax ... " hey you kids, get offa my lawn !"FabBlondie , 21 Sep 2017 17:06There are good reasons to teach coding. Too many of today's computer users are amazingly unaware of the technology that allows them to send and receive emails, use their smart phones, and use websites. Few understand the basic issues involved in computer security, especially as it relates to their personal privacy. Hopefully some introductory computer classes could begin to remedy this, and the younger the students the better.peterainbow -> brady , 21 Sep 2017 17:05Security problems are not strictly a matter of coding.
Security issues persist in tech. Clearly that is not a function of the size of the workforce. I propose that it is a function of poor management and design skills. These are not taught in any programming class I ever took. I learned these on the job and in an MBA program, and because I was determined.
Don't confuse basic workforce training with an effective application of tech to authentic needs.
How can the "disruption" so prized in today's Big Tech do anything but aggravate our social problems? Tech's disruption begins with a blatant ignorance of and disregard for causes, and believes to its bones that a high tech app will truly solve a problem it cannot even describe.
Kool Aid anyone?
indeed that idea has been around as long as cobol and in practice has just made things worse; the fact that many people outside of software engineering don't seem to realise is that the coding itself is a relatively small part of the job

FabBlondie -> imipak , 21 Sep 2017 17:04

Hurrah.

peterainbow -> rebel7 , 21 Sep 2017 17:04

so how many female and old software engineers are there who are unable to get a job, i'm one of them at 55 finding it impossible to get a job and unlike many 'developers' i know what i'm doing

peterainbow , 21 Sep 2017 17:02

meanwhile the age and sex discrimination in IT goes on, if there were truly a shortage i wouldn't be unemployed

Jared Hall -> peter nelson , 21 Sep 2017 17:01

Training more people for an occupation will result in more people becoming qualified to perform that occupation, regardless of the fact that many will perform poorly at it. A CS degree is no guarantee of competency, but it is one of the best indicators of general qualification we have at the moment. If you can provide a better metric for analyzing the underlying qualifications of the labor force, I'd love to hear it.

peter nelson -> FabBlondie , 21 Sep 2017 17:00

Regarding your anecdote, while interesting, it is poor evidence when compared to the aggregate statistical data analyzed in the EPI study.
Job-specific training is completely different.
Good grief. It's not job-specific training. You sound like someone who knows nothing about computer programming.
Designing a computer program requires analysing the task; breaking it down into its components, prioritising them and identifying interdependencies, and figuring out which parts of it can be broken out and done separately. Expressing all this in some programming language like Java, C, or C++ is quite secondary.
So once you learn to organise a task properly you can apply it to anything - remodeling a house, planning a vacation, repairing a car, starting a business, or administering a (non-software) project at work.
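As a rough, hypothetical sketch of what that decomposition looks like once it is written down (the task and the step names are made up; the structure, not the language, is the point):

    // Hypothetical sketch: a task broken into steps with explicit dependencies.
    // The same breakdown could be expressed in any language, or as a plain checklist.
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.stream.Collectors;

    public class ReportJob {

        public void run(String inputPath, String outputPath) throws Exception {
            List<String> rows = readInput(inputPath);   // step 1: gather raw material
            List<String> valid = validate(rows);        // step 2: depends on step 1
            List<String> summary = summarise(valid);    // step 3: depends on step 2
            writeOutput(summary, outputPath);           // step 4: depends on step 3
        }

        private List<String> readInput(String path) throws Exception {
            return Files.readAllLines(Path.of(path));
        }

        private List<String> validate(List<String> rows) {
            // keep only non-empty rows
            return rows.stream().filter(r -> !r.isBlank()).collect(Collectors.toList());
        }

        private List<String> summarise(List<String> rows) {
            return List.of("rows=" + rows.size());
        }

        private void writeOutput(List<String> lines, String path) throws Exception {
            Files.write(Path.of(path), lines);
        }
    }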
Oct 02, 2017 | discussion.theguardian.com
peter nelson -> c mm , 21 Sep 2017 19:49
Instant feedback is one of the things I really like about programming, but it's also the thing that some people can't handle. As I'm developing a program all day long the compiler is telling me about build errors or warnings or when I go to execute it it crashes or produces unexpected output, etc. Software engineers are bombarded all day with negative feedback and little failures. You have to be thick-skinned for this work.

peter nelson -> peterainbow , 21 Sep 2017 19:42

How is it shallow and lazy? I'm hiring for the real world so I want to see some real world accomplishments. If the candidate is fresh out of university they can't point to work projects in industry because they don't have any. But they CAN point to stuff they've done on their own. That shows both motivation and the ability to finish something. Why do you object to it?

anticapitalist -> peter nelson , 21 Sep 2017 14:47

Thank you. The kids that spend high school researching independently and spend their nights hacking just for the love of it and getting a job without college are some of the most competent I've ever worked with. Passionless college grads that just want a paycheck are some of the worst.

John Kendall , 21 Sep 2017 19:42

There is a big difference between "coding" and programming. Coding for a smart phone app is a matter of calling functions that are built into the device. For example, there are functions for the GPS or for creating buttons or for simulating motion in a game. These are what we used to call subroutines. The difference is that whereas we had to write our own subroutines, now they are just preprogrammed functions. How those functions are written is of little or no importance to today's coders.

Game Cabbage -> theindyisbetter , 21 Sep 2017 19:40

Nor are they able to program on that level. Real programming requires not only a knowledge of programming languages, but also a knowledge of the underlying algorithms that make up actual programs. I suspect that "coding" classes operate on a quite superficial level.
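As a rough illustration of that distinction, the hypothetical Java sketch below does the same job twice: once by calling the preprogrammed function, and once by writing the underlying routine (an insertion sort) by hand.

    import java.util.Arrays;

    public class SortDemo {
        public static void main(String[] args) {
            int[] a = {5, 2, 9, 1};
            int[] b = a.clone();

            // "Coding" in the sense above: call the preprogrammed function.
            Arrays.sort(a);

            // "Programming": write the underlying routine yourself (insertion sort).
            for (int i = 1; i < b.length; i++) {
                int key = b[i];
                int j = i - 1;
                while (j >= 0 && b[j] > key) {
                    b[j + 1] = b[j];
                    j--;
                }
                b[j + 1] = key;
            }

            System.out.println(Arrays.toString(a)); // [1, 2, 5, 9]
            System.out.println(Arrays.toString(b)); // [1, 2, 5, 9]
        }
    }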
It's not about the amount of work or the amount of labor. It's about the comparative availability of both and how that affects the balance of power, and that in turn affects the overall quality of life for the 'majority' of people.

c mm -> Ed209 , 21 Sep 2017 19:39

Most of this is not true. Peter Nelson gets it right by talking about breaking steps down and thinking rationally. The reason you can't just teach the theory, however, is that humans learn much better with feedback. Think about trying to learn how to build a fast car, but you never get in and test its speed. That would be silly. Programming languages take the system of logic that has been developed for centuries and give instant feedback on the results. It's a language of rationality.

peter nelson -> peterainbow , 21 Sep 2017 19:37

This article is about the US. The tech industry in the EU is entirely different, and basically moribund. Where is the EU's Microsoft, Apple, Google, Amazon, Oracle, Intel, Facebook, etc, etc? The opportunities for exciting interesting work, plus the time and schedule pressures that force companies to overlook stuff like age because they need a particular skill Right Now, don't exist in the EU. I've done very well as a software engineer in my 60's in the US; I cannot imagine that would be the case in the EU.

peterainbow -> peter nelson , 21 Sep 2017 19:37

sorry but that's just not true, i doubt you are really programming still, or are a quasi programmer but really a manager who likes to keep their hand in, you certainly aren't busy as you've been posting all over this cif. also why would you try and hire someone with such disparate skillsets, makes no sense at all

c mm -> peterainbow , 21 Sep 2017 19:36

oh and you'd be correct that i do have workplace issues, ie i have a disability and i also suffer from depression, but that shouldn't bar me from employment and again regarding my skills going stale, that again contradicts your statement that it's about planning/analysis/algorithms etc that you said above ( which to some extent i agree with )
Not at all, it's really egalitarian. If I want to hire someone to paint my portrait, the best way to know if they're any good is to see their previous work. If they've never painted a portrait before then I may want to go with the girl who hasc mm -> ragingbull , 21 Sep 2017 19:34There is definitely not an excess. Just look at projected jobs for computer science on the Bureau of Labor statistics.c mm -> perble conk , 21 Sep 2017 19:32Right? It's ridiculous. "Hey, there's this industry you can train for that is super valuable to society and pays really well!"peterainbow -> peter nelson , 21 Sep 2017 19:29
Then Ben Tarnoff: "Don't do it! If you do you'll drive down wages for everyone else in the industry. Build your fire starting and rock breaking skills instead."

How about how New Labour tried to sign away IT access in England to India in exchange for banking access there? How about the huge loopholes in bringing in cheap IT workers from elsewhere in the world? Not conspiracies, but facts.

peter nelson -> eirsatz , 21 Sep 2017 19:25

I think the difference between gifted and not is motivation. But I agree it's not innate. The kid who stayed up all night in high school hacking into the school server to fake his coding class grade is probably more gifted than the one who spent 4 years in college getting a BS in CS because someone told him he could get a job when he got out.

peter nelson -> TheBananaBender , 21 Sep 2017 19:20

I've done some hiring in my life and I always ask them to tell me about stuff they did on their own.
peter nelson -> Ed209 , 21 Sep 2017 19:19Most coding jobs are bug fixing.
The only bugs I have to fix are the ones I make.
As several people have pointed out, writing a computer program requires analyzing and breaking down a task into steps, identifying interdependencies, prioritizing the order, figuring out what parts can be organized into separate tasks that can be done separately, etc.

peter nelson -> ragingbull , 21 Sep 2017 19:14

These are completely independent of the language - I've been programming for 40 years in everything from FORTRAN to APL to C to C# to Java and it's all the same. Not only that, but they transcend programming - they apply to planning a vacation, remodeling a house, or fixing a car.
Neither coding nor having a bachelor's degree in computer science makes you a suitable job candidate. I've done a lot of recruiting and interviews in my life, and right now I'm trying to hire someone. And I've never recommended hiring anyone right out of school who could not point me to a project they did on their own, i.e., not just grades and test scores. I'd like to see an iOS or Android app, or an open-source component, or a utility or program of theirs on GitHub, or something like that.

peter nelson -> nickGregor , 21 Sep 2017 19:07

That's the thing that distinguishes software from many other fields - you can do something real and significant on your own. If you haven't managed to do so in 4 years of college you're not a good candidate.
Ricardo111 -> martinusher , 21 Sep 2017 19:03Within the next year coding will be old news and you will simply be able to describe things in ur native language in such a way that the machine will be able to execute any set of instructions you give it.In a sense that's already true, as i noted elsewhere. 90% of the code in my projects (Java and C# in their respective IDEs) is machine generated. I do relatively little "coding". But the flaw in your idea is this: most of what software designers do is not coding. It requires domain knowledge and that's where the "smart" IDEs and AI coding wizards fall down. It will be a long time before we get where you describe.
Completely agree. At the highest levels there is more work that goes into managing complexity and making sure nothing is missed than in making the wheels turn and the beepers beep.

ragingbull , 21 Sep 2017 19:02

Hang on... if the current excess of computer science grads is not driving down wages, why would training more kids to code make any difference?

Ricardo111 -> youngsteveo , 21 Sep 2017 18:59

I've actually interviewed people for very senior technical positions in Investment Banks who had all the fancy talk in the world and yet failed at some very basic "write me a piece of code that does X" tests.

eirsatz -> Ricardo111 , 21 Sep 2017 18:57

The next hurdle on from that is people who have learned how to deal with certain situations and yet don't really understand how it works, so are unable to figure it out if you change the problem parameters.
That said, the average coder is only slightly beyond this point. The ones who can take into account maintainability and flexibility for future enhancements when developing are already a minority, and those who can understand the why of software development process steps, design software system architectures or do a proper Technical Analysis are very rare.
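For readers outside the field, it may help to see what one of those basic "write me a piece of code that does X" screens typically looks like. A minimal Java sketch of one hypothetical exercise (reverse a string without using library shortcuts) is below.

    public class InterviewScreen {
        // Typical whiteboard-level exercise: reverse a string without using
        // StringBuilder.reverse() or other library shortcuts.
        static String reverse(String s) {
            char[] chars = s.toCharArray();
            for (int i = 0, j = chars.length - 1; i < j; i++, j--) {
                char tmp = chars[i];
                chars[i] = chars[j];
                chars[j] = tmp;
            }
            return new String(chars);
        }

        public static void main(String[] args) {
            System.out.println(reverse("banking")); // gniknab
        }
    }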
Hubris. It's easy to mistake efficiency born of experience for innate talent. The difference between a 'gifted coder' and a 'non-gifted junior coder' is much more likely to be 10 or 15 years sitting at a computer, less if there are good managers and mentors involved.

Ed209 , 21 Sep 2017 18:57

Politicians love the idea of teaching children to 'code', because it sounds so modern, and nobody could possibly object... could they? Unfortunately it simply shows up their utter ignorance of technical matters, because there isn't a language called 'coding'. Computer programming languages have changed enormously over the years, and continue to evolve. If you learn the wrong language you'll be about as welcome in the IT industry as a lamp-lighter or a comptometer operator.

peter nelson -> YEverKnot , 21 Sep 2017 18:54

The pace of change in technology can render skills and qualifications obsolete in a matter of a few years, and only the very best IT employers will bother to retrain their staff - it's much cheaper to dump them. (Most IT posts are outsourced through agencies anyway - those that haven't been off-shored.)
And this isn't even a good conspiracy theory; it's a bad one. He offers no evidence that there's an actual plan or conspiracy to do this. I'm looking for an account of where the advocates of coding education met to plot this in some castle in Europe or maybe a secret document like "The Protocols of the Elders of Google", or some such.TheBananaBender , 21 Sep 2017 18:52Most jobs in IT are shit - desktop support, operations droids. Most coding jobs are bug fixing.Ricardo111 -> Wiretrip , 21 Sep 2017 18:49Tool Users Vs Tool Makers. The really good coders actually get why certain things work as they do and can adjust them for different conditions. The mass produced coders are basically code copiers and code gluing specialists.peter nelson -> AmyInNH , 21 Sep 2017 18:49People who get Masters and PhD's in computer science are not usually "coders" or software engineers - they're usually involved in obscure, esoteric research for which there really is very little demand. So it doesn't surprise me that they're unemployed. But if someone has a Bachelor's in CS and they're unemployed I would have to wonder what they spent their time at university doing.Ricardo111 , 21 Sep 2017 18:44The thing about software that distinguishes it from lots of other fields is that you can make something real and significant on your own . I would expect any recent CS major I hire to be able to show me an app or an open-source component or something similar that they made themselves, and not just test scores and grades. If they could not then I wouldn't even think about hiring them.
Fortunately for those of us who are actually good at coding, the difference in productivity between a gifted coder and a non-gifted junior developer is something like 100-fold. Knowing how to code and actually being efficient at creating software programs and systems are about as far apart as knowing how to write and actually being able to write a bestselling exciting Crime trilogy.peter nelson -> jamesupton , 21 Sep 2017 18:36peter nelson -> Julian Williams , 21 Sep 2017 18:34The rest of us will still have to stick to burning buildings down and stringing up the aristocracy.
If you know how to write software you can get a robot to do those things.
I do think there is excess supply of software programmers. There is only a modest number of decent jobs, say as an algorithms developer in finance, general architecture of complex systems or to some extent in systems security.

YEverKnot , 21 Sep 2017 18:32

This article is about coding; most of those jobs require very little of that.
Most very high paying jobs in the technology sector are in the same standard upper management roles as in every other industry.
How do you define "high paying". Everyone I know (and I know a lot because I've been a sw engineer for 40 years) who is working fulltime as a software engineer is making a high-middle-class salary, and can easily afford a home, travel on holiday, investments, etc.
freeandfair -> WithoutPurpose , 21 Sep 2017 18:31

Nowt like a good conspiracy theory.

> Tech's push to teach coding isn't about kids' success – it's about cutting wages
What is a stupidly low salary? 100K?freeandfair -> AmyInNH , 21 Sep 2017 18:30> Already there. I take it you skipped right past the employment prospects for US STEM grads - 50% chance of finding STEM work.peter nelson -> edmundberk , 21 Sep 2017 18:30That just means 50% of them are no good and need to develop their skills further or try something else.
Not everyone with a STEM degree from some 3rd-rate college is capable of doing complex IT or STEM work.

AmyInNH -> peter nelson , 21 Sep 2017 18:27

So, is teaching people English or arithmetic all about reducing wages for the literate and numerate?
Yes. Haven't you noticed how wage growth has flattened? That's because some do-gooders thought it would be a fine idea to educate the peasants. There was a time when only the well-to-do knew how to read and write, and that's why the well-to-do were well-to-do. Education is evil. Stop educating people and then those of us who know how to read and write can charge them for reading and writing letters and email. Better yet, we can have Chinese and Indians do it for us and we just charge a transaction fee.
Masses of the public use cars; it doesn't mean millions need schooling in auto mechanics. Same for software coding. We aren't even using those who have Bachelors, Masters and PhDs in CS.

carlospapafritas , 21 Sep 2017 18:27

"..importing large numbers of skilled guest workers from other countries through the H1-B visa program..."

freeandfair -> whitehawk66 , 21 Sep 2017 18:27

"Skilled" is good. H1B has long (approx. 17 years) been abused and turned into a trafficking scheme. One can buy an H1B in India. Powerful ethnic networks wheeling & dealing in the US & EU, selling IT jobs to what are essentially migrants.
Real IT wages haven't been stagnant but steadily falling since the 90s. It's easy to see why. An $82K/year IT wage was about average in the 90s. Comparing the prices of housing (& pretty much everything else) between then and now gives you the idea.
> not every kid wants or needs to have their soul sucked out of them sitting in front of a screen full of code for some idiotic service that some other douchebro thinks is the next iteration of sliced bread

James Dey , 21 Sep 2017 18:25

Taking a couple of years of programming is not enough to do this as a job, don't worry.
But learning to code is like learning maths - it helps to develop logical thinking, which will benefit you in every area of your life.

We should stop teaching our kids to be journalists; then your wage might go up.

peter nelson -> AmyInNH , 21 Sep 2017 18:23

What does this even mean?
David McCaul -> IanMcLzzz , 21 Sep 2017 13:03

There are very few professional scribes nowadays; a good level of reading & writing is simply a default even for the lowest paid jobs. A lot of basic entry-level jobs require a good level of Excel skills. Several years from now basic coding will be necessary to manipulate basic tools for entry-level jobs, especially as increasingly a lot of real code will be generated by expert systems supervised by a tiny number of supervisors. Coding jobs will go the same way that trucking jobs will go when driverless vehicles are perfected.

anticapitalist, 21 Sep 2017 14:25
Offer the class, but not mandatory. Just like I could never succeed playing football, others will not succeed at coding. The last thing the industry needs is more bad developers showing up for a paycheck.

Programming is a cultural skill; master it, or even understand it on a simple level, and you understand how the 21st century works, on the machinery level. To bereave the children of this crucial insight is to close off a door to their future. What's next, keep them off Math, because, you know . .

Taylor Dotson -> freeandfair , 21 Sep 2017 13:59

That's some crystal ball you have there. English teachers will need to know how to code? Same with plumbers? Same with janitors, CEOs, and anyone working in the service industry?

PolydentateBrigand , 21 Sep 2017 12:59

The economy isn't a zero-sum game. Developing a more skilled workforce that can create more value will lead to economic growth and improvement in the general standard of living. Talented coders will start new tech businesses and create more jobs.

mlzarathustra , 21 Sep 2017 16:52
I agree with the basic point. We've seen this kind of tactic for some time now. Silicon Valley is turning into a series of micromanaged sweatshops (that's what "agile" is truly all about) with little room for genuine creativity, or even understanding of what that actually means. I've seen how impossible it is to explain to upper level management how crappy cheap developers actually diminish productivity and value. All they see is that the requisition is filled for less money.AmyInNH -> peter nelson , 21 Sep 2017 16:51The bigger problem is that nobody cares about the arts, and as expensive as education is, nobody wants to carry around a debt on a skill that won't bring in the bucks. And smartphone-obsessed millennials have too short an attention span to fathom how empty their lives are, devoid of the aesthetic depth as they are.
I can't draw a definite link, but I think algorithm fails, which are based on fanatical reliance on programmed routines as the solution to everything, are rooted in the shortage of education and cultivation in the arts.
Economics is a social science, and all this is merely a reflection of shared cultural values. The problem is, people think it's math (it's not) and therefore set in stone.
Geeze, it'd be nice if you'd make an effort.

peter nelson -> WyntonK , 21 Sep 2017 16:45
rucore.libraries.rutgers.edu/rutgers-lib/45960/PDF/1/
https://rucore.libraries.rutgers.edu/rutgers-lib/46156 /
https://rucore.libraries.rutgers.edu/rutgers-lib/46207/

Libertarianism posits that everyone should be free to sell their labour or negotiate their own arrangements without the state interfering. So if cheaper foreign labour really was undercutting American labour, the Libertarians would be thrilled.

Jared Hall -> RogTheDodge , 21 Sep 2017 16:42

But it's not. I'm in my 60's and retiring, but I've been a software engineer all my life. I've worked for many different companies, and in different industries, and I've never had any trouble competing with cheap imported workers. The people I've seen fall behind were ones who did not keep their skills fresh. When I was laid off in 2009 in my mid-50's I made sure my mobile-app skills were bleeding edge (in those days ANYTHING having to do with mobile was bleeding edge) and I used to go to job interviews with mobile devices to showcase what I could do. That way they could see for themselves and not have to rely on just a CV.
They older guys who fell behind did so because their skills and toolsets had become obsolete.
Now I'm trying to hire a replacement to write Android code for use in industrial production and struggling to find someone with enough experience. So where is this oversupply I keep hearing about?
Not producing enough to fill vacancies, or not producing enough to keep wages at Google's preferred rate? Seeing as research shows there is no lack of qualified developers, the latter option seems more likely.

JayThomas , 21 Sep 2017 16:39

FabBlondie -> RogTheDodge , 21 Sep 2017 16:39

It's about ensuring those salaries no longer exist, by creating a source of cheap labor for the tech industry.
We're already using Asia as a source of cheap labor for the tech industry. Why do we need to create cheap labor in the US? That just seems inefficient.
There was never any need to give our jobs to foreigners. That is, if you are comparing the production of domestic vs. foreign workers. The sole need was, and is, to increase profits.peter nelson -> AmyInNH , 21 Sep 2017 16:34Link?FabBlondie , 21 Sep 2017 16:34Schools MAY be able to fix big social problems, but only if they teach a well-rounded curriculum that includes classical history and the humanities. Job-specific training is completely different. What a joke to persuade public school districts to pick up the tab on job training. The existing social problems were not caused by a lack of programmers, and cannot be solved by Big Tech.peter nelson -> Jared Hall , 21 Sep 2017 16:31I agree with the author that computer programming skills are not that limited in availability. Big Tech solved the problem of the well-paid professional some years ago by letting them go, these were mostly workers in their 50s, and replacing them with H1-B visa-holders from India -- who work for a fraction of their experienced American counterparts.
It is all about profits. Big Tech is no different than any other "industry."
Supply of apples does not affect the demand for oranges. Teaching coding in high school does not necessarily alter the supply of software engineers. I studied Chinese History and geology at University but my doing so has had no effect on the job prospects of people doing those things for a living.johnontheleft -> Taylor Dotson , 21 Sep 2017 16:30You would be surprised just how much a little coding knowledge has transformed my ability to do my job (a job that is not directly related to IT at all).peter nelson -> Jared Hall , 21 Sep 2017 16:29Because teaching coding does not affect the supply of actual engineers. I've been a professional software engineer for 40 years and coding is only a small fraction of what I do.peter nelson -> Jared Hall , 21 Sep 2017 16:28You and the linked article don't know what you're talking about. A CS degree does not equate to a productive engineer.freeluna -> freeluna , 21 Sep 2017 16:18A few years ago I was on the recruiting and interviewing committee to try to hire some software engineers for a scientific instrument my company was making. The entire team had about 60 people (hw, sw, mech engineers) but we needed 2 or 3 sw engineers with math and signal-processing expertise. The project was held up for SIX months because we could not find the people we needed. It would have taken a lot longer than that to train someone up to our needs. Eventually we brought in some Chinese engineers which cost us MORE than what we would have paid for an American engineer when you factor in the agency and visa paperwork.
Modern software engineers are not just generic interchangeable parts - 21st century technology often requires specialised scientific, mathematical, production or business domain-specific knowledge, and those people are hard to find.
...also, this article is alarmist and I disagree with it. Dear Author, Phphphphtttt! Sincerely, freelunaAmyInNH , 21 Sep 2017 16:16Regimentation of the many, for benefit of the few.AmyInNH -> Whatitsaysonthetin , 21 Sep 2017 16:15Visa jobs are part of trade agreements. To be very specific, US gov (and EU) trade Western jobs for market access in the East.jigen , 21 Sep 2017 16:13
http://www.marketwatch.com/story/in-india-british-leader-theresa-may-preaches-free-trade-2016-11-07
There is no shortage. This is selling off the West's middle class.
Take a look at remittances in wikipedia and you'll get a good idea just how much it costs the US and EU economies, for sake of record profits to Western industry.And thanks to the author for not using the adjective "elegant" in describing coding.freeluna , 21 Sep 2017 16:13I see advantages in teaching kids to code, and for kids to make arduino and other CPU powered things. I don't see a lot of interest in science and tech coming from kids in school. There are too many distractions from social media and game platforms, and not much interest in developing tools for future tech and science.jigen , 21 Sep 2017 16:13Let the robots do the coding. Sorted.FluffyDog -> rgilyead , 21 Sep 2017 16:13Although coding per se is a technical skill it isn't designing or integrating systems. It is only a small, although essential, part of the whole software engineering process. Learning to code just gets you up the first steps of a high ladder that you need to climb a fair way if you intend to use your skills to earn a decent living.rebel7 , 21 Sep 2017 16:11BS.AmyInNH -> WyntonK , 21 Sep 2017 16:11Friend of mine in the SV tech industry reports that they are about 100,000 programmers short in just the internet security field.
Y'all are trying to create a problem where there isn't one. Maybe we shouldn't teach them how to read either. They might want to work somewhere besides the grill at McDonalds.
To which they will respond, offshore.AmyInNH -> MrFumoFumo , 21 Sep 2017 16:10They're not looking for good, they're looking for cheap + visa indentured. Non-citizens.nickGregor , 21 Sep 2017 16:09Within the next year coding will be old news and you will simply be able to describe things in ur native language in such a way that the machine will be able to execute any set of instructions you give it. Coding is going to change from its purely abstract form that is not utilized at peak- but if you can describe what you envision in an effective concise manner u could become a very good coder very quickly -- and competence will be determined entirely by imagination and the barriers of entry will all but be extinctAmyInNH -> unclestinky , 21 Sep 2017 16:09Already there. I take it you skipped right past the employment prospects for US STEM grads - 50% chance of finding STEM work.AmyInNH -> User10006 , 21 Sep 2017 16:06Apparently a whole lot of people are just making it up, eh?JCA1507 -> whitehawk66 , 21 Sep 2017 16:04
http://www.motherjones.com/politics/2017/09/inside-the-growing-guest-worker-program-trapping-indian-students-in-virtual-servitude /
From today,
http://www.computerworld.com/article/2915904/it-outsourcing/fury-rises-at-disney-over-use-of-foreign-workers.html
All the way back to 1995,
https://www.youtube.com/watch?v=vW8r3LoI8M4&feature=youtu.beBravoJCA1507 -> DirDigIns , 21 Sep 2017 16:01Total... utter... no other way... huge... will only get worse... everyone... (not a very nuanced commentary is it).RogTheDodge -> WithoutPurpose , 21 Sep 2017 16:01I'm glad pieces like this are mounting, it is relevant that we counter the mix of messianism and opportunism of Silicon Valley propaganda with convincing arguments.
That's not my experience.AmyInNH -> TTauriStellarbody , 21 Sep 2017 16:01It's a stall tactic by Silicon Valley, "See, we're trying to resolve the [non-existant] shortage."AmyInNH -> WyntonK , 21 Sep 2017 16:00They aren't immigrants. They're visa indentured foreign workers. Why does that matter? It's part of the cheap+indentured hiring criteria. If it were only cheap, they'd be lowballing offers to citizen and US new grads.RogTheDodge -> Jared Hall , 21 Sep 2017 15:59No. Because they're the ones wanting them and realizing the US education system is not producing enoughRogTheDodge -> Jared Hall , 21 Sep 2017 15:58Except the demand is increasing massively.RogTheDodge -> WyntonK , 21 Sep 2017 15:57That's why we are trying to educate American coders - so we don't need to give our jobs to foreigners.AmyInNH , 21 Sep 2017 15:56Correct premises,Oldvinyl , 21 Sep 2017 15:51
- proletarianize programmers
- many qualified graduates simply can't find jobs.
Invalid conclusion:
- The problem is there aren't enough good jobs to be trained for.That conclusion only makes sense if you skip right past ...
" importing large numbers of skilled guest workers from other countries through the H1-B visa program. These workers earn less than their American counterparts, and possess little bargaining power because they must remain employed to keep their status"Hiring Americans doesn't "hurt" their record profits. It's incessant greed and collusion with our corrupt congress.
This column was really annoying. I taught my students how to program when I was given a free hand to create the computer studies curriculum for a new school I joined. (Not in the UK thank Dog). 7th graders began with studying the history and uses of computers and communications tech. My 8th grade learned about computer logic (AND, OR, NOT, etc) and moved on with QuickBASIC in the second part of the year. My 9th graders learned about databases and SQL and how to use HTML to make their own Web sites. Last year I received a phone call from the father of one student thanking me for creating the course, his son had just received a job offer and now works in San Francisco for Google.WyntonK -> DirDigIns , 21 Sep 2017 15:47
I am so glad I taught them "coding" (UGH), as the writer puts it, rather than arty-farty subjects not worth a damn in the jobs market.

I live and work in Silicon Valley and you have no idea what you are talking about. There's no shortage of coders at all. Terrific coders are let go because of their age and the availability of much cheaper foreign coders (no, I am not opposed to immigration).

Sean May , 21 Sep 2017 15:43

Looks like you pissed off a ton of people who can't write code and are none too happy with you pointing out the reason they're slinging insurance for Geico.

Jared Hall -> User10006 , 21 Sep 2017 15:43

I think you're quite right that coding skills will eventually enter the mainstream and slowly bring down the cost of hiring programmers.
The fact is that even if you don't get paid to be a programmer you can absolutely benefit from having some coding skills.
There may however be some kind of major coding revolution with the advent of quantum computing. The way code is written now could become obsolete.
Why is it a fantasy? Does supply and demand not apply to IT labor pools?Jared Hall -> ninianpark , 21 Sep 2017 15:42Why is it a load of crap? If you increase the supply of something with no corresponding increase in demand, the price will decrease.pictonic , 21 Sep 2017 15:40A well-argued article that hits the nail on the head. Amongst any group of coders, very few are truly productive, and they are self starters; training is really needed to do the admin.Jared Hall -> DirDigIns , 21 Sep 2017 15:39There is not a huge skills shortage. That is why the author linked this EPI report analyzing the data to prove exactly that. This may not be what people want to believe, but it is certainly what the numbers indicate. There is no skills gap.Axel Seaton -> Jaberwocky , 21 Sep 2017 15:34http://www.epi.org/files/2013/bp359-guestworkers-high-skill-labor-market-analysis.pdf
Yeah, but the money is crap.

DirDigIns -> IanMcLzzz , 21 Sep 2017 15:32

Perfect response for the absolute crap that the article is pushing.

DirDigIns , 21 Sep 2017 15:30

Total and utter crap, no other way to put it.

Whatitsaysonthetin -> Evelita , 21 Sep 2017 15:27

There is a huge skills shortage in key tech areas that will only get worse if we don't educate and train the young effectively.
Everyone wants youth to have good skills for the knowledge economy and the ability to earn a good salary and build up life chances for UK youth.
So we get this verbal diarrhoea of an article. Defies belief.
Yes. China and India are indeed training youth in coding skills - in order that they take jobs in the USA and UK! It's been going on for 20 years and has resulted in many experienced IT staff struggling to get work at all and, even if they can, suffering stagnating wages.

WmBoot , 21 Sep 2017 15:23

Wow. Congratulations to the author for provoking such a torrent of vitriol! Job well done.

TTauriStellarbody , 21 Sep 2017 15:22

Has anyone's job been at risk from a 16-year-old who can cobble together a couple of lines of javascript since the dot com bubble? Good luck trying to teach a big enough pool of US school kids regular expressions, let alone the kind of test-driven continuous delivery that is the norm in the industry now.

freeandfair -> youngsteveo , 21 Sep 2017 13:27
> A lot of resumes come across my desk that look qualified on paper, but that's not the same thing as being able to do the job

I have exactly the same experience. There is undeniably a skill gap. It takes about a year for a skilled professional to adjust and learn enough to become productive; it takes about 3-5 years for a college grad.

It is nothing new. But the issue is, as the college grad gets trained, another company steals him/her. And also keep in mind, all this time you are doing your job and training the new employee as time permits. Many companies in the US cut the non-profit departments (such as IT) to the bone; we cannot afford to lose a person and then train another replacement for 3-5 years.

The solution? Hire a skilled person. But that means nobody is training college grads, and in 10-20 years we are looking at a skill shortage to the point where the only option is bringing in foreign labor.
American cut-throat companies that care only about the bottom line cannibalized themselves.
farabundovive -> Ethan Hawkins , 21 Sep 2017 15:10
UncommonTruthiness , 21 Sep 2017 14:10

Given how shit most coders of my acquaintance have been - especially in matters of work ethic, logic, matching s/w to user requirements and willingness to test and correct their gormless output - most future coding work will probably be in the area of disaster recovery. Sorry, since the poor snowflakes can't face the sad facts, we have to call it "business continuation" these days, don't we?

Heh. You are not a coder, I take it. :) Going to be a few decades before even the easiest coding jobs vanish.
The demonization of Silicon Valley is clearly the next place to put all blame. Look what "they" did to us: computers, smart phones, HD television, world-wide internet, on and on. Get a rope!I moved there in 1978 and watched the orchards and trailer parks on North 1st St. of San Jose transform into a concrete jungle. There used to be quite a bit of semiconductor equipment and device manufacturing in SV during the 80s and 90s. Now quite a few buildings have the same name : AVAILABLE. Most equipment and device manufacturing has moved to Asia.
Programming started with binary, then machine code (hexadecimal or octal) and moved to assembler as a compiled and linked structure. More compiled languages like FORTRAN, BASIC, PL-1, COBOL, PASCAL, C (and all its "+'s") followed making programming easier for the less talented. Now the script based languages (HTML, JAVA, etc.) are even higher level and accessible to nearly all. Programming has become a commodity and will be priced like milk, wheat, corn, non-unionized workers and the like. The ship has sailed on this activity as a career.
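To make that abstraction gap concrete: the hypothetical Java method below is a single line of high-level source, and the commented block underneath shows roughly the stack-machine bytecode a disassembler such as javap prints for it. Each earlier generation of languages required programmers to work closer to that lower level.

    public class Add {
        // One line of a high-level language...
        static int add(int a, int b) {
            return a + b;
        }
    }

    /*
     * ...is compiled down to a handful of stack-machine instructions.
     * `javap -c Add` shows the method body roughly as:
     *
     *   0: iload_0
     *   1: iload_1
     *   2: iadd
     *   3: ireturn
     *
     * Hand-writing that level of detail is what assembler and machine-code
     * programming involved; each later generation of languages hid more of it.
     */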
Having great developers means creating a great environment. In an increasingly competitive world, that means everything from free food to paid screw-off time. But not everyone has gotten the message.
Some places still practice developer abuse. Here are its many forms. Do not indulge in more than one or two, or you may never see your best developers again.
1. Hellish security
I've been to a place whose McAfee proxy bans Zip files containing HelloWorld.java. This means that everything from downloading build tools to examples is prohibited. At another shop, the McAfee desktop security scans every file a process touches for malicious code, even files unchanged since the last time it checked them, in a single-threaded fashion, which means putting the entire contents of thousands of files through one core of the CPU for every operation. It took 30 minutes to launch the IDE and up to another 10 minutes to launch a build, even if the build touched only three source files and ran for a few seconds.

2. Torture tools
There is Subversion, and there is Git [4]. Frankly, all other version control/configuration management tools are way too slow and/or painful. ClearCase is the mother of all developer torture tools. One ... day ... the ... code ... will ... check ... out ...

3. Maintenance teams
Some places still have fixed teams, which get all the sucky work. Seriously, no one will stay on the "maintenance team" once they find a better job -- and the odds are on their side.

4. Forced Windows
Forcing your developers to use Windows as a development environment if they aren't writing .Net code is pretty sadistic. Forced Windows means feeding your developers the same crap nontechnical users are forced to run, with many of the same restrictions. (I realize that someone on my team will say I forced them to use Linux. That's really too bad.)

5. Locking out all libraries
Years ago, when I worked at IBM, I was told not to use third-party libraries -- open source or not -- unless it would save me at least two months of development time, because the hour or two of lawyer time necessary to vet everything would cost more than two months of my time. I upped my hourly billing rate soon after. Sure, you need a policy stipulating where and how you will consume libraries without going through a formal vetting process, but even so, "optimistic locking" is usually fine. Otherwise you're committing a heinous act of developer abuse by forcing everyone to reinvent the wheel.

6. WebSphere
Look, I can live with DB2 now that IBM finally added READ_COMMITTED. But WebSphere is too painful. Don't make me boot WebSphere or go through the 50 GUI screens necessary to deploy a WAR file. Seriously, WebSphere is developer abuse pure and simple. It is bad. It has always been bad. If you take nothing else away from this, uninstall WebSphere. Otherwise, your developers will hate on you behind your back. If they don't, you need better developers.

7. Marching unto death
If you force your developers into a constant death march, you burn them out. Also, they feel like they've put everything together with paperclips and duct tape. A maintenance sprint is not the answer because no one wants to clean up a big mess for weeks on end. Instead, give your developers slightly less time than they want. Anything more will produce cost overruns, and anything less is developer abuse.

8. Ah, 2010 was a good year
So you really like "standardizing" on old stuff -- say, a three- to four-year-old version of an IDE running on a version of the operating system (probably Windows) it was never tested on. In a competitive labor market, you can bet someone will offer your developers full benefits and the chance to use a newer version.

9. I'm sorry, we only do boring
I sympathize that you hate HipsterHacker [5] architecture. You don't want a system that was developed entirely in "oh look shiny let me download that too." On the other hand, making your developers write code in the same set of stuff with nothing new under the sun makes them think about their career. Mix it up, let your developers touch new stuff, and mitigate the risk into a stable architecture.

10. The VM of the eternal hourglass
Your company is totally virtual! Even the development environments are virtual! Cool beans, man! Oh, but you underfunded the hardware or didn't tune it right, so everything is slow, horrible torture. Some people are Zen and don't mind a two-hour-long build process. I am not one of those people.

11. The faux-scrum daily standup meeting
There's a special level of hell reserved for the worst sinners. It's known as the daily scrum meeting for management status updates, where everyone feels compelled to talk for at least five to six minutes, not to convey important developments, but to communicate to management that they're busy doing stuff and should stay employed. The meetings inevitably have 12 or more people in them, the vast majority of whom don't need to be present. They run for 30 to 45 minutes or more (a real scrum should take about five to six minutes), and everyone not speaking spaces out. Worse, no one does any work before these meetings because they know the context switch is coming. After the meeting, well, lunch is in another hour anyway, so why start anything hard now? Basically these meetings cost your team their entire morning, poison morale, and accomplish nothing. Learn to do agile right or cancel the meeting.

12. Formalism
Formalism is more than wearing a suit and can show up in unexpected ways. Ironically, casual environments are more difficult because establishing the line is harder, but developers have fought long and hard against formal environments. When I worked for JBoss, three directors of development in a row told me I shouldn't wear a dress shirt-and-slacks combo, and they would pay for me to buy decent clothes at Target. They were dead serious. The concern was that if JBoss (a perceived sandal brigade) showed up in a suit, management would eventually force the developers back into the penguin costume. The casual environment was, in fact, a synthetic construct. I tried setting the bar recently when hiring an HR manager who immediately asked, "Does this mean we can't say [the F word] anymore?" I replied, "No, [the F word] is a holy word, and we will utter it at any time, any place, and for any or no reason whatsoever ... unless customers are around."

There is a divide between the button-down environments and the no-button environments that crosses into how people talk (such as dongle jokes [6]) and how people think ("makes the most technical sense" versus "isn't in my territory"). Some of this is due to larger environments requiring more rules; other times, people love ceremony and excessive "formalism" in procedure, speech, dress, and more. Forcing developers to do everything inside the box for the sake of formality is abuse of the mind and spirit.
13. Management by hostage crisis
Sometimes a load test will fail, and management may want to hear the root cause and a solution. They may even threaten to revert the changes, even if that breaks the implementation. This is a perfect path to knocking the development process off track. Micromanaging and asserting authority from on high not only interrupts the normal iterative process of implementation and testing; it also makes developers afraid to try anything that might attract unwanted attention. Threatening drastic and immediate destructive action to resolve problems without understanding the related functionality leads to a rushed product at best. Putting a project at the mercy of panicked customers or managers assures developers that the situation is out of their control, but they will be blamed for the outcome despite their warnings. Goals and deadlines guide work to completion. A thrashing whirlwind destroys it.

14. We ask the questions around here
Let's say someone finds a rogue machine trying to connect to Skype on a restricted port. The developer is unaware this violates the rules, and when they ask about the other guidelines, no details are provided. Congratulations -- you've just punished a developer for failing to adhere to vague, unannounced, or undocumented restrictions. Don't be surprised if this leaves them looking for the nearest exit.

15. Details, baby, details
Pointy-haired boss: "The customer asked for a tweak. Can you add that in time for release?"

Coder: "No. That would require a major architectural change. We asked before we started out, and we were told not to spend the time making that sort of expansion possible."
Though requirements are rarely set in stone at the outset, pressing developers to take the shortest path to them without accommodating likely changes puts them in a tight spot when the new demands come later.
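To make the trap concrete, here is a minimal, hypothetical sketch (the invoice and currency example is mine, not the article's). The "shortest path" version bakes the stated requirement -- all prices are USD cents -- into every signature, so the later "tweak" of adding a second currency ripples through every caller; a slightly more accommodating design makes the same change a matter of data.

```java
// Hypothetical example, not from the article.
import java.util.List;

// Shortest-path version: the original requirement ("totals are USD cents")
// is hard-wired into the signature, so supporting EUR later means changing
// this method and every caller.
final class InvoiceV1 {
    static long totalUsdCents(long[] usdCents) {
        long sum = 0;
        for (long c : usdCents) sum += c;
        return sum;
    }
}

// Slightly more accommodating version: the currency travels with the amount,
// so "add EUR support" becomes new data rather than a new architecture.
record Money(long minorUnits, String currency) {}

final class InvoiceV2 {
    static Money total(List<Money> lines, String currency) {
        long sum = 0;
        for (Money m : lines) {
            if (!m.currency().equals(currency)) {
                throw new IllegalArgumentException("mixed currencies in one invoice");
            }
            sum += m.minorUnits();
        }
        return new Money(sum, currency);
    }
}
```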
16. Never mind how it works, just tell me how it works
Some managers demand immediate solutions to problems while simultaneously refusing to entertain hypotheses about the causes and resisting efforts to investigate properly. It usually comes with verbal challenges to the effect of, "Aren't you an expert? Why can't you just explain it or solve it?" To find a solution, you must investigate the causes, hypothesize, and test those hypotheses. No, we can't fast-forward to the end; without that, the problem-solving methodology devolves into guess-and-check.

This article, "16 ways to torture developers [7]," was originally published at InfoWorld.com [8] on Andrew Oliver's Strategic Developer blog [10].
- Management has renamed its Waterfall process to Agile Waterfall
- You start hiring consultants so they can take the blame
- The Continuous Integration server has returned the error message "Screw it, I give up"
- You have implemented your own Ruby framework that uses XML configuration files
- Your eldest team member references Martin Fowler as a 'snot-nosed punk'
- Your source code control system is a series of folders on a shared drive
- Allocated QA time is for Q and A why your crap is broken
- All of your requirements are written on a used cocktail napkin
- You start considering a new job so you don't have to maintain the application you are building
- The lead web developer thinks the X in XHTML means 'extreme'
- Every iteration meeting starts with "Do you want the good news or the bad news…"
- Your team still gives a crap about its CMM Level
- Progress is now measured by the number of fixed bugs and not completed features
- Continuous Integration is getting new employees to read the employee handbook
- You are friends with the janitor
- The SCRUM master doesn't really care what you did yesterday or what you will do today
- Every milestone ends in a dead sprint
- Your best developer only has his A+ Certification
- You do not understand the acronyms DRY, YAGNI, or KISS; but you do understand WTF, PHB, and FUBAR
- Your manager could be replaced by an email redirection batch file
- The only certification your software process has is ISO 9001/2000
- Your manager thinks 'Metrics' is a type of protein drink
- Every bug is prioritized as Critical
- Every feature is prioritized as Trivial
- Project estimates magically match the budget
- Developers use the excuse of 'self documenting code' for no comments
- Your favorite software pattern is God Object
- You still believe compiling is a form of testing
- Developers still use Notepad as an IDE
- Your manager wastes 7 hours a week asking for progress reports (true story)
- You do not have your own machine and you are not doing pair programming
- Team Rule – No meetings until 10 AM since we were all here until 2 AM
- Your team believes ORM is a 'fad'
- Your team believes the transition from VB6 to VB.NET will be 'seamless'
- Your manager thinks MS Project is the best management tool the market offers
- Your spouse only gets to see you on a webcam
- None of your unit tests have asserts in them
- FrontPage is your web page editor of choice
- You get into flame wars over whether { should go on a new line, but are indifferent to patterns such as MVC
- The company motto is 'Do more with less'
- The phrase 'It works on my machine' is heard more than once a day
- The last conference your .NET team attended was Apple WWDC 2000
- Your manager insists that you track all activity but never uses the information to make decisions
- All debugging occurs on the live server
- Your manager does not know how to check email
- Your manager thinks being SOX compliant means not working on baseball nights
- The company hires Senator Ted Stevens to give your project kick-off inspirational speech
- The last book you read – Visual InterDev 6 Bible
- The overall budget is mistaken for your weekly Mountain Dew bill
- Your manager spends his lunch hour crying in his car (another true story)
- Your lead web developer defines AJAX as a cleaning product
- Your boss expects you to spend the next 2 days creating a purchase request for a $50 component
- The sales team decreased your estimates because they believe you can work faster
- Requirement – Rank #1 on Google
- Every day you work until midnight; every day your boss leaves at 4:30
- Your manager loves to say "Why do the developers care? They get paid by the hour."
- The night shift at Starbucks knows you by name
- Management cannot understand why anyone needs more than a single monitor
- Your development team only uses source control as a power failure backup system
- Developers are not responsible for any testing
- The team does not use SVN because they believe the merge algorithms are black voodoo magic
- Your white boards are mostly white (VersionOne)
- The client continually mistakes your burn-down chart for a burn-up chart
- The project code name is renamed to 'The Death March'
- Now it physically pains you to say the word – Yes
- Your teammates don't refactor, they refuctor
- To reward you for all of your overtime your boss purchases a new coffee maker
- Your project budget is entered in the company ledger as 'Corporate Overhead'
- You secretly outsource pieces of the project so you can blog at work
- A Change Control Board is created and your product isn't even at its first alpha version
- Daily you consider breaking your fingers for the short term disability check
- The deadline has been renamed a 'milestone'…just like the last 'milestone'
- Your project manager's 'open door' policy only applies between 5:01 PM – 7:59 AM
- Your boss argues "Why buy it when we can build it!"
- You bring beer to the office during your 2nd shift
- The project manager is spotted consulting a Ouija board
- You give misinformation to your teammates so you look better on your personal review
- All code reviews are scheduled a week before product launch
- Budget for testing exists as "if we have time"
- The client will only talk about the requirements after they receive a fixed estimation
- The boss does not find the humor in Dilbert
- You start noticing your boss's poker tells during planning poker
- You start wondering if working 2 shifts at Pizza Hut is a better career alternative
- All performance issues are resolved by getting larger machines
- The project has been demoted to being released as a permanent 'Beta' version
- Your car is towed from the office parking lot as it was thought to be abandoned
- The project manager likes to doodle during requirements gathering meetings
- You are using MOSS 2007
- Your SCRUM team consists of 1
- Your timesheet looks like a Powerball ticket
- The web developer thinks being 508 compliant means looking good in her Levi Red Tabs
- You think you need Multiple Personality Disorder medication because you are Mort, Elvis, and Einstein
- Your manager substitutes professional consultant advice for a Magic 8 Ball
- You know exactly how many compile warnings cause an 'Out of Memory' exception in your IDE
- I have used IDE twice in this list and you still don't know what it stands for
- You have cut and pasted code from The Daily WTF
- Broken unit tests are deleted because they are obviously out of date
- You are sent to a conference to learn, but you skip sessions to go hunting for swag
- QA has nicknamed you Chief Off-By-One
- You have been 90% complete 90% of the time
- "Oh, oh, and I almost forgot. Ahh, I'm also gonna need you to go ahead and come in on Sunday, too… thanks"
Jeffrey Nonken:
- Any given program, once deployed, is already obsolete.
- It is easier to change the specification to fit the program than vice versa.
- If a program is useful, it will have to be changed.
- If a program is useless, it will have to be documented.
- Only ten percent of the code in any given program will ever execute.
- Software expands to consume all available resources.
- Any non-trivial program contains at least one error.
- The probability of a flawless demo is inversely proportional to the number of people watching, raised to the power of the amount of money involved.
- Not until a program has been in production for at least six months will its most harmful error be discovered.
- Undetectable errors are infinite in variety, in contrast to detectable errors, which by definition are limited.
- The effort required to correct an error increases exponentially with time.
- Program complexity grows until it exceeds the capabilities of the programmer who must maintain it.
- Any code of your own that you haven't looked at in months might as well have been written by someone else.
- Inside every small program is a large program struggling to get out.
- The sooner you start coding a program, the longer it will take.
- A carelessly planned project takes three times longer to complete than expected; a carefully planned project takes only twice as long.
- Adding programmers to a late project makes it later.
- A program is never less than 90% complete, and never more than 95% complete.
- If you automate a mess, you get an automated mess.
- Build a program that even a fool can use, and only a fool will want to use it.
- Users truly don't know what they want in a program until they use it.
#6 is actually an extension of Parkinson's Law. http://en.wikipedia.org/wiki/Parkinson's_law
#17 is covered in The Mythical Man-Month. The author is serious.
#13: I've learned to comment my code extensively. I pretend that in six months somebody will have to work on the code again, and that it might be me. If it isn't me, I'll have to stop whatever I AM working on to answer questions.
It's true often enough to continue to motivate me.
I've gotten high praise for the quality of my commenting.
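For what it's worth, here is a small, invented Java illustration of the kind of comment that tends to earn that praise: it records why the code deviates from the obvious approach, which is exactly the question the maintainer six months from now (often your future self) will ask. The retry/back-off scenario and the vendor constraint are assumptions made up for the example.

```java
// Hypothetical example; the vendor constraint described below is invented.
final class RetryPolicy {
    /**
     * Delay before retrying a failed upload.
     *
     * We back off exponentially but cap at 30 seconds: the (assumed) vendor
     * gateway drops connections that stay idle longer than about 35 seconds,
     * so waiting longer only trades one failure mode for another. That is the
     * "why" a future maintainer cannot recover from the code alone.
     */
    static long backoffMillis(int attempt) {
        return (long) Math.min(30_000, 500 * Math.pow(2, attempt));
    }
}
```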
#15 is just a corollary of #6.
#19 is covered in Code Complete 2, in a fashion. He urges you to get your code working RIGHT before worrying about getting it working FAST. If you optimize bad code you'll end up with fast, bad code, and you'll just end up throwing away all the effort along with the optimized code.
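A small sketch of that "right before fast" discipline, in Java (the duplicate-counting example is mine, not from Code Complete): write the obviously correct version first, keep it around, and only trust the optimized version once it has been checked against the slow baseline.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical example of "make it right, then make it fast".
final class Distinct {
    // Version 1: O(n^2) but trivially correct -- easy to reason about and test.
    static int countDistinctSlow(int[] xs) {
        int count = 0;
        for (int i = 0; i < xs.length; i++) {
            boolean seenBefore = false;
            for (int j = 0; j < i; j++) {
                if (xs[j] == xs[i]) { seenBefore = true; break; }
            }
            if (!seenBefore) count++;
        }
        return count;
    }

    // Version 2: O(n), written only after version 1 was known to be right.
    static int countDistinctFast(int[] xs) {
        Set<Integer> seen = new HashSet<>();
        for (int x : xs) seen.add(x);
        return seen.size();
    }

    public static void main(String[] args) {
        int[] sample = {3, 1, 4, 1, 5, 9, 2, 6, 5, 3};
        // Cross-check the optimization against the trusted baseline (prints "7 7").
        System.out.println(countDistinctSlow(sample) + " " + countDistinctFast(sample));
    }
}
```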
Sorry if I seem to be picking on this list; I'm really not. It's a pretty good list, and funny. It's just that after three decades of programming I've learned most of these laws the hard way. :)