Joined: 3/9/2004
Posts: 4588
Location: In his lab studying psychology to find new ways to torture TASers and forumers
Oh I agree. I was just disputing the point that knowing asm is useless because 99% of the time one doesn't write code in it; asm has uses beyond writing code in it.
Warp wrote:
Even when debugging a C/C++ program, my experience is different than yours. While knowing what the compiler is doing behind the scenes helps understanding what's going on and aids eg. in memory consumption optimization (as well as other optimizations, such as minimizing memory fragmentation), I don't see how it aids in debugging.
Let's put it this way, I've submitted over a dozen bug reports to GCC for code generation errors when different options are in use.
jlun2 wrote:
Search around the web for open-source projects written in the language of your choice (in this case, C++), and analyze the code to learn how other people program
I agree with Warp, this idea isn't a great one.
I'd also add that there's tons and tons of horrible code out there. You should not be looking at bad code as an ideal to hold up. A lot of the projects you pointed out are filled with a lot of bad stuff.
Patashu wrote:
For alternatives to C, Microsoft has 'Managed C', where you can declare objects to be within managed memory and they will be freed later by a garbage collector. Much of the mental work in making a C program work is in handling memory yourself, so this helps a LOT.
Or you could just learn C++, and make use of the STL containers with RAII, and stop worrying about memory management or any other kind of resource leakage.
Then you could write classic C-style code, and instead of using malloc(), use a vector or something else even more fitting. Where you need to deal with a resource, bite the bullet on using classes, wrap your functions in one, and make use of a destructor.
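To make that advice concrete, here's a rough sketch (FileHandle is a made-up wrapper, not a standard class): a vector replaces manual malloc/free, and a tiny class with a destructor handles a C resource.

```cpp
#include <cstdio>
#include <vector>

// A minimal RAII wrapper around a C FILE*: the destructor closes the
// handle, so every return path (including exceptions) cleans up.
class FileHandle {
public:
    FileHandle(const char* path, const char* mode)
        : f_(std::fopen(path, mode)) {}
    ~FileHandle() { if (f_) std::fclose(f_); }
    FileHandle(const FileHandle&) = delete;            // non-copyable:
    FileHandle& operator=(const FileHandle&) = delete; // one owner per handle
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
};

int sum_values() {
    std::vector<int> values;        // replaces malloc/realloc/free
    for (int i = 1; i <= 4; ++i)
        values.push_back(i);
    int sum = 0;
    for (int v : values) sum += v;
    return sum;                     // the vector frees itself here
}
```

The point is that neither piece of code contains a single explicit free or delete, yet nothing leaks.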
A lot of the so-called modern versions of C (Java and friends) offer classes, but enforce an object-oriented design. Perhaps such constraints make sense on huge projects, but they really make programs which should be simple and straightforward more complex than they need to be.
Really, a programmer should have a lot of tools in his or her toolbox, and learn which tool is best for each job and make use of it. A lot of the more modern languages take the approach of offering higher-quality basic tools, but don't allow you to use every kind of tool.
Warning: Opinions expressed by Nach or others in this post do not necessarily reflect the views, opinions, or position of Nach himself on the matter(s) being discussed therein.
So many opinions here ^_^
I figure I'll throw in my own opinions, as well.
First off, I would recommend a top-down approach. Essentially, doing that allows better abstractions and provides more tools for you to work with. This increases your code flexibility, reduces the number of bugs, and reduces the time to shipping. This, in turn, makes your customers happier.
The only real reason to go deeper is necessity. Java, C#, etc. are high-level languages, and as such you cannot control their memory layout, for instance. This has a huge impact on cache locality. So if you need to increase performance, you need to go further down. But if you don't need to optimize that much (as is often the case on modern computers), then there is no reason to do it.
As such, starting off with C, assembly or some other low-level language is a horrible way to start.
Now, going deeper is fine. It's not something to be scared of. It will make you understand more of how computers work, and that in turn will help you write better programs. But using those languages comes at a cost, and you need to be aware of that.
So I'd say start off with C#. Java is a horrible language, in my opinion, so I would avoid it.
Starting off with Javascript is also a horrible decision. That language is horrible. It completely lacks type safety, and lacks tons of modern programming idioms. Microsoft recently introduced TypeScript, which looks promising, but it will take some time to see how well it fares. The concept is good, however.
Type safety is a very good thing that helps you find bugs earlier and easier. Thus, you WANT as much type safety as you can get without it becoming too much of a headache.
Learning from others' code is a horrible way of learning how to program. Partially because there are so many coding styles out there, some of them really horrible, and most importantly because there are tons of bad practices out there. As a newbie, you do not understand what is considered good and bad practice. All languages have them, and in time you will learn; until such time, learning from others' code is usually a bad idea. Contributing to projects is a good thing, however.
Also, it is generally better to learn several languages than to try to master some few specific languages. Programming is a science, and as such, you should broaden your horizon by looking at several languages with different paradigms to learn concepts, how to code, and tons of other stuff. The rest is typically syntactic sugar, and that is more easily learned than concepts, algorithms, etc.
So after you learn C# or some other high level language, you can go down to C++, then C and assembly, if you wish. I'm mostly experienced with C++, it being my favorite language, so I can recommend the book Accelerated C++, which is a very good introductory book for C++.
I would avoid C like the plague. The reason being that C++ offers everything C does, plus tons of other advantages that C cannot offer. Granted, there are certain systems and platforms where C++ cannot be used, and for such reasons, knowing C is not a bad idea. But avoid using it unless you absolutely must.
The same goes for assembly language since that is generally completely unreadable.
Don't know if I missed something, but this is my opinion, anyway.
EEssentia wrote:
Starting off with Javascript is also a horrible decision. That language is horrible. It completely lacks type safety, and lacks tons of modern programming idioms.
That's not a flaw of JavaScript's.
There are type based languages and typeless languages. Most scripting languages are typeless.
Typeless languages are an absolute nightmare to deal with any kind of rigid data. Be it byte based algorithms such as hashing, encryption and so on, or emulating CPUs, or math where having a certain finite power of two precision is important.
On the other hand, typeless languages are very easy to dive into, and they remove a lot of the headache of choosing the right types, which most people constantly get wrong in type based languages. They're also extremely good at passing data around; free-form data in bulk can be moved easily, which is very important for networking applications.
So on the one hand, it's really a pain to do precise and complex math on my data in a typeless language, and I need to do a lot of type-safety checks that the language doesn't just have built in; on the other hand, it's really easy to work with data in bulk and pass it around.
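To make the "rigid data" point above concrete, here's a tiny C++ sketch of the kind of byte-level math that depends on fixed power-of-two precision (32-bit FNV-1a, chosen purely as an illustration). Its correctness relies on uint32_t wrapping modulo 2^32, a guarantee typeless languages simply don't give you.

```cpp
#include <cstdint>
#include <string>

// 32-bit FNV-1a hash: the multiply is *supposed* to overflow and wrap,
// which uint32_t guarantees. In a typeless language, numbers silently
// promote (e.g. to doubles) and this algorithm quietly breaks.
std::uint32_t fnv1a(const std::string& data) {
    std::uint32_t hash = 2166136261u;   // FNV offset basis
    for (unsigned char byte : data) {
        hash ^= byte;
        hash *= 16777619u;              // FNV prime; wraps mod 2^32
    }
    return hash;
}
```

In JavaScript you'd have to sprinkle `>>> 0` and emulate the 32-bit multiply by hand to get the same result.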
True, true. I dislike typeless languages in the first place. But in the end, Javascript falls into the typeless category, and as such, it becomes a horrible language in my view.
Not dealing with types can make things easier, true. But in the end, you will learn quite quickly that simply by having strict typing, you will catch more bugs a lot faster, and it doesn't have to be a big project to realize it, either!
Either way, I'm against typeless languages as they make life difficult. So it is my opinion that newbies should be exposed to strongly typed languages first in order to learn to deal with typing. When they later need to use a typeless language, they will learn just how much a pain it is.
In most cases where I need to do something math-heavy in a typeless language, I'll write the code in C and link it in. Most typeless languages support that.
Unfortunately, running JavaScript in a browser, that's not really an option without plugins like Google's NaCl (which is not salt).
It is definitely important to understand the difference though, and it personally drives me crazy when I have to work with a typeless language on anything beyond simple cases.
Or you could just learn C++, and make use of the STL containers with RAII, and stop worrying about memory management or any other kind of resource leakage.
To be fair, though, even RAII isn't a bulletproof technique to avoid memory management problems. For example, it does not help at all with mistakes like dangling pointers, so one needs to be aware of such things even when using just STL containers and nothing else. (It's enough to e.g. add elements to a std::vector in order to invalidate any existing iterators/pointers/references to a previous element. Accessing the element through such an invalidated pointer is undefined behavior, and a very hard-to-find bug.)
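A hedged sketch of that pitfall and one common workaround (the function name is made up for illustration): hold an index rather than a pointer or iterator, and re-derive the element after any call that may grow the vector.

```cpp
#include <cstddef>
#include <vector>

// After push_back, any previously taken pointer/iterator/reference into
// the vector may dangle (reallocation moves the elements). Holding an
// *index* and re-indexing after each mutation side-steps the problem.
int first_after_growth() {
    std::vector<int> v = {1, 2, 3};
    // Unsafe would be: int* p = &v[0];  -- p may dangle after push_back.
    std::size_t idx = 0;     // an index survives reallocation
    v.push_back(4);          // may reallocate and move all elements
    return v[idx];           // re-derive the element through the index
}
```

The nasty part is that the pointer version often *appears* to work until the vector happens to grow past its capacity, which is exactly what makes these bugs so hard to find.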
While RAII is really, really helpful in making programs safer and simpler, one still has to know the caveats and the ways to shoot oneself on the foot.
This is the reason why so many programmers prefer so-called "safe" languages (such as Java and C#) where such things just don't happen(*), even if it means that the program will be a bit slower than the equivalent C++ program would be.
(*) Null-pointer exceptions notwithstanding...
RAII works badly with cyclic data structures. And those tend to occur when two objects depend on each other. Which is very common, even in sane designs.
I think you are confusing RAII with reference counting. Not the same thing.
(For example, std::list is a doubly-linked list and it fully utilizes RAII, like any other STL data container. It has no cyclicity problems.)
Yes, you're right. I simply looked at the "cyclic dependencies" part and forgot the rest. Oops. Sorry about that :p
Yes, I am fully aware of what RAII and reference counting are.
Still, AFAIK, my point on cyclic dependencies still holds.
In my experience the problems of reference counting are greatly exaggerated. They can happen, of course, but it's not very common in practice.
PHP uses reference counting for memory management (or at least did it for a long time; I haven't checked the absolutely latest versions) and while you can indeed create cyclic dependencies that end up in leaked objects that never get freed, it doesn't happen all that often. (Yes, you can come up with a ton of situations where it happens inadvertently, but given the amount of PHP code out there, they are not something that people write very often in practice.)
Objective-C (or at least Apple's variant of it using the Cocoa library) uses reference counting as well, and you can get cyclic references, but it happens rarely. (Although now with code blocks it can happen much more easily without you even noticing...)
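For what it's worth, in C++ the standard way to break such a reference-counting cycle is std::weak_ptr, which observes an object without owning it. A minimal sketch (Parent/Child are made-up names for the two mutually-dependent objects):

```cpp
#include <memory>

struct Parent;

struct Child {
    // A shared_ptr back to Parent would form an ownership cycle and leak
    // both objects; weak_ptr observes without owning, so it never keeps
    // the Parent alive.
    std::weak_ptr<Parent> parent;
};

struct Parent {
    std::shared_ptr<Child> child;   // owning forward reference
};

bool parent_freed_cleanly() {
    std::weak_ptr<Parent> probe;
    {
        auto p = std::make_shared<Parent>();
        p->child = std::make_shared<Child>();
        p->child->parent = p;       // back-reference, but non-owning
        probe = p;
    }                               // p's count drops to 0: both destroyed
    return probe.expired();         // true: no leak despite the "cycle"
}
```

Had `Child::parent` been a shared_ptr instead, both objects would have kept each other's count above zero forever, which is precisely the leak scenario described above.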
Btw, some object never getting destroyed because of a cyclic dependency is not the only problem that can happen with reference counting. It's also possible for an object to be destroyed too soon (and then code accessing the destroyed object). Can you figure out how this could happen?
(And yes, it happens purely via the reference counting mechanism. I'm not talking about bypassing it and starting destroying objects manually and intentionally.)
Warp wrote:
While RAII is really, really helpful in making programs safer and simpler, one still has to know the caveats and the ways to shoot oneself on the foot.
In the foot.
Warp wrote:
This is the reason why so many programmers prefer so-called "safe" languages (such as Java and C#) where such things just don't happen
I disagree. Many programmers prefer "safe" languages because certain bad features simply don't exist, and because of misconceptions about C++.
On the first point, while STL containers and techniques like RAII make memory problems in C++ almost a non-issue, it is still "possible" to write bad code which doesn't handle memory properly (since new and malloc exist, and you're not forced to use destructors and so on). So even though using C++ correctly means you rarely if ever run into a memory problem (they do exist, but are virtually non-existent once you know what you're doing), languages like Java don't allow "incorrect" usage (of course there are pointer problems, but the advertising makes it seem such things don't exist). C++ is about trusting the developer. Java and friends are about trusting the language and your VM. Management generally prefers trusting the latter.
On the second point, I heard over and over again, even dozens of times in the past few years lines like: "Java is better than C++, because Java offers a string class."
I even know someone who hasn't looked at C++ since 1994ish, who wrote a long essay last year on how outdated and useless C++ is, and his shock that people still use it. His essay consisted of several dozen points, not a single one of them true.
No. It's also a circular situation, but slightly different. It happens, for example, like this:
Module X owns a (reference-counted) object of class Y. A member function of Y calls a function of X (and then does other things with itself). That function of X happens to drop the reference to object Y, causing it to be destroyed (because only X had a reference to it). As the execution returns to the function in Y, it will be operating on a prematurely destroyed object.
(This isn't a problem in a garbage-collected system because the GC engine sees that there's still a "'this' pointer" pointing to the object, so it will never destroy it prematurely.)
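In shared_ptr terms, the usual defence against this is for the member function to grab an extra owning reference to its own object for the duration of the call. A rough sketch of the scenario above (X, Y and the function names are invented to mirror it):

```cpp
#include <memory>

struct Y;

// Module X owns what is meant to be the only reference to a Y.
struct X {
    std::shared_ptr<Y> y;
    void dropY();
};

struct Y : std::enable_shared_from_this<Y> {
    X* owner = nullptr;
    int value = 0;

    int work() {
        // Take an extra owning reference for the duration of this call.
        // Without it, owner->dropY() would destroy *this mid-call and the
        // write to 'value' below would touch a dead object.
        auto keepAlive = shared_from_this();
        owner->dropY();          // X drops its (only) shared_ptr to us
        value = 42;              // still safe: keepAlive owns us
        return value;
    }                            // keepAlive released: Y destroyed here
};

void X::dropY() { y.reset(); }

int run_scenario() {
    X x;
    x.y = std::make_shared<Y>();
    x.y->owner = &x;
    Y* raw = x.y.get();          // non-owning view; x.y is the sole owner
    return raw->work();
}
```

Without the keepAlive line this is exactly the premature-destruction bug described above, and it is undefined behavior rather than a clean error.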
Nach wrote:
I disagree. Many programmers prefer "safe" languages because certain bad features simply don't exist, and because of misconceptions about C++.
Actually what you say there and what I said are not mutually exclusive things. Many programmers do indeed prefer "safe languages" for misinformed reasons. However, many other programmers prefer to do so even though they are more or less quite well aware of how C++ works and how it's used properly (yet still prefer using another language where you don't have to worry about those things.)
I know how to use C++ safely and effectively (during the past 10 years or so I don't remember having had a memory leak even once; I have had a couple of off-by-one access mistakes, and maybe a few other such errors, but nothing that directly relates to memory management per se), but I recognize perfectly well and accept that proper safe memory management in C++ necessitates more care and more design (that has to be done exclusively to make the memory management secure) that isn't necessary in other languages.
There are many issues that are not handled even when strictly adhering to RAII design (ie. using the so-called "rule of three" and so on). I mentioned dangling pointers as an example (ie. pointers that might end up pointing to a destroyed object, causing undefined behavior if it's dereferenced). In those other languages there is no such thing as a dangling pointer for the simple reason that all objects are garbage-collected: If something references an object, it won't be destroyed. While this usually comes with the price of increased memory consumption and somewhat slower execution speed, many programmers are ready to pay that price for the comfort and safety.
Warp wrote:
I know how to use C++ safely and effectively (during the past 10 years or so I don't remember having had a memory leak even once; I have had a couple of off-by-one access mistakes, and maybe a few other such errors, but nothing that directly relates to memory management per se), but I recognize perfectly well and accept that proper safe memory management in C++ necessitates more care and more design (that has to be done exclusively to make the memory management secure) that isn't necessary in other languages.
These other languages have the trade-off of increased overhead, though, and garbage collection many times is, in essence, "memory leaks" which are cleaned up periodically. Sometimes not even well: PHP has its share of memory leaks, as do various browsers with JavaScript, that never get cleaned up.
Warp wrote:
There are many issues that are not handled even when strictly adhering to RAII design (ie. using the so-called "rule of three" and so on). I mentioned dangling pointers as an example (ie. pointers that might end up pointing to a destroyed object, causing undefined behavior if it's dereferenced). In those other languages there is no such thing as a dangling pointer for the simple reason that all objects are garbage-collected: If something references an object, it won't be destroyed. While this usually comes with the price of increased memory consumption and somewhat slower execution speed, many programmers are ready to pay that price for the comfort and safety.
In C++, you should be using std::unique_ptr and std::shared_ptr to avoid a lot of the issues, and in the case you describe where you may point to something that is no longer there, it actually exists in Java as well - NullPointerException.
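For illustration, a minimal sketch of the unique_ptr half of that advice (Buffer and its counter are made up purely for the demo): ownership is explicit, transfers by move rather than copy, and the object is freed exactly once without any manual delete.

```cpp
#include <memory>
#include <utility>

struct Buffer {
    static int alive;            // tracks live instances for the demo
    Buffer()  { ++alive; }
    ~Buffer() { --alive; }
};
int Buffer::alive = 0;

int live_after_scope() {
    {
        auto a = std::make_unique<Buffer>();       // sole owner
        std::unique_ptr<Buffer> b = std::move(a);  // transfer, not copy
        // 'a' is now empty; 'b' is the one and only owner.
    }           // b destroys the Buffer here; nothing to free by hand
    return Buffer::alive;
}
```

shared_ptr works the same way but allows several owners, freeing the object when the last one goes away.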
This wasn't meant to be a long discussion about programming language flame wars >_>
I'm halfway through a great book on Python, but I'm kind of hoping there's a better IDE to use than the one Python provides. As a noob I have a bad habit of forgetting indents, colons, mixing up variable types, etc., and I'm hoping there's an IDE that's more forgiving with that (e.g. adding colons automatically after an if statement).
I don't think it has gotten to the point where it's a flame war yet. Let's hope it doesn't get there.
I think the discussion is valuable, as one can clearly see advantages and disadvantages of languages, implementations and paradigms--a good thing for any programmer!
In C++, you should be using std::unique_ptr and std::shared_ptr to avoid a lot of the issues
They help only if you allocate individual objects dynamically. They don't help if you have eg. a container of objects (such as std::vector or std::set), nor do they help with stack-based objects.
The vast majority of C++ programs don't need those for the simple reason that objects are usually handled by value, or you are using well-encapsulated data containers to handle them (such as the standard ones). The danger of dangling pointers is still there regardless, though.
Note that you don't even need to use literal pointers to get a dangling pointer. It's enough to use iterators. And they can happen inadvertently. One example situation is having a class like this:
Language: cpp
class Something
{
    std::list<SomeType> elements;
    std::list<SomeType>::iterator currentElement;
    ...
};
One easily forgets the so-called "rule of three" in the above class because nowhere in the class is there a 'new' or 'delete' (which is usually the instinctive trigger for an experienced C++ programmer to start worrying about the rule of three.)
(The problem in the above class happens if you copy/assign it, if it doesn't have the proper copy constructor and assignment operator defined.)
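One possible way to patch the class above (a sketch, not the only approach; SomeType is stood in by int) is to write the copy operations so that they recompute the iterator as an offset into the freshly copied list, instead of letting the compiler-generated copy leave it pointing into the source object's list:

```cpp
#include <iterator>
#include <list>

using SomeType = int;   // stand-in for the example's element type

class Something {
public:
    Something() : currentElement(elements.end()) {}

    // Rule of three: the compiler-generated copy would leave
    // 'currentElement' pointing into the *source* object's list.
    Something(const Something& other) : elements(other.elements) {
        auto offset = std::distance<std::list<SomeType>::const_iterator>(
            other.elements.begin(), other.currentElement);
        currentElement = std::next(elements.begin(), offset);
    }

    Something& operator=(const Something& other) {
        if (this != &other) {
            auto offset = std::distance<std::list<SomeType>::const_iterator>(
                other.elements.begin(), other.currentElement);
            elements = other.elements;
            currentElement = std::next(elements.begin(), offset);
        }
        return *this;
    }

    void add(SomeType v) {
        elements.push_back(v);
        currentElement = std::prev(elements.end());
    }
    SomeType current() const { return *currentElement; }

private:
    std::list<SomeType> elements;
    std::list<SomeType>::iterator currentElement;
};
```

After this, copying a Something gives the copy an iterator into its *own* list, so mutating the copy no longer corrupts (or depends on) the original.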
and in the case you describe where you may point to something that is no longer there, it actually exists in Java as well - NullPointerException.
That's not a pointer that points to a destroyed object (because it's impossible for an object to be destroyed in Java if there's a reference pointing to it). It happens when you use a null reference, which is completely different. In C++ accessing a destroyed object is undefined behavior and can cause debugging nightmares (and even disastrous bugs). It might not even cause problems every time, but only sporadically. In Java accessing a null reference is very much defined behavior and will not cause any havoc. (Sure, it's indicative of a bug, but it's not dangerous. It simply throws an exception.)
I don't think it has gotten to the point where it's a flame war yet. Let's hope it doesn't get there.
I think the discussion is valuable, as one can clearly see advantages and disadvantages of languages, implementations and paradigms--a good thing for any programmer!
Well, the other thing is I can't follow half of what's being said, since I obviously don't have ANY experience dealing with memory leaks and faulty pointers and whatever else is being discussed (it's mostly Chinese to me).
Not that there's anything wrong with such a discussion, but the point of this topic was to help a noob such as myself learn how to program at a level where I can become financially independent. Not that my only reason for programming is mercenary (I LOVE coding) but I haz bills to pay.
I think plenty of good choices to start with have been discussed.
Pick an entry language (read the discussion to decide), then master the basics, then delve into algorithms and data structures, then design patterns, then broaden your horizons with more languages, and so on.
Not that there's anything wrong with such a discussion, but the point of this topic was to help a noob such as myself learn how to program at a level where I can become financially independent. Not that my only reason for programming is mercenary (I LOVE coding) but I haz bills to pay.
Programming is not something that can be learned in a couple of weeks or even months (very much unlike people thought in the 80's and, unfortunately, some even today). It takes years. You won't be paying any bills in your near future with programming.
(Ok, of course there are some employers out there who have no idea whatsoever about programming and believe someone is a programmer just because they say so, and might even be one of those who think that programming is something that even a kid can learn in a few days(*), so you never know...)
(*) I think that many people in the 80's and 90's had, and some even today still do, even though probably to a lesser extent, the misconception that programming is something akin to, for example, writing a transcript of a company meeting: Just mechanical writing down of what you want. Something that anybody having the skills to write can learn easily and quickly.
I recommend getting the free version of Unity 3D and making a client-server game with a Node.js backend. No need to get fancy, a tic-tac-toe game with a leaderboard would be a start.
Pros:
You'll be making a game, which I suppose is motivating as you're posting that on TASVideos...
Unity's object and behavior model is a good example to learn OOP through composition and event-based programming
You'll be using C# (well, you could use JavaScript with Unity, but you'd be missing the point...), which is a nice and clean language, unlike C and C++
If you like, you can also give a try to shader programming
Node.js will force you (you can't begin to imagine how much...) to learn asynchronous programming, callbacks and all
Node.js is programmed in JavaScript, which will grant you Web skills too, and it is a different enough language from C# to be worth learning
I also recommend learning some of 6502, Z80 or ARM assembly. Not to be able to actually write programs with it, but to be able to grasp how the higher level code work under the hood. It makes it much easier to understand data structures (stacks, lists, trees, etc.) when you can see that it references actual memory in a structured way and is not only some abstract drawing that a teacher put on a whiteboard.
Programming is also not just about writing code. Don't neglect algorithms, design patterns, refactoring and writing beautiful code. I also second Code Complete and I highly recommend reading Clean Code (especially chapters 1 to 10 and 17, actually a quick read).
Finally, avoid "unclean" stuff like the plague until you're confident enough it won't pollute your programmer mind. "Unclean" stuff, for me, means C, C++, x86 assembly, etc. PHP is somewhat unclean for me too, which is why I suggested Node.js instead. Also, make sure you learn typed languages before untyped ones (you already know Java, which is good).
Jumping into a big project right at the start is not a good way to start programming.
Furthermore, C++ is not "unclean." Now we're talking about the misconceptions mentioned earlier. Yes, C++ has inherited a lot of bad stuff from C, but if you avoid that like the plague, it's a nice, modern, very powerful, albeit complicated, language.
And believe it or not, it's easy to write safe, modern, flexible and powerful C++ code with few to no consequences. Unlike C and assembly.