Post subject: Bobo's continuing adventures in Lua
Player (80)
Joined: 8/5/2007
Posts: 865
Get the code here!

Preamble (Yeah, you can skip this if you want.)

After returning to the site about two weeks ago, I was freshly inspired to take up programming again, having set it aside for several years. It's been very refreshing to scratch that itch again. Contrary to my reputation 'round these parts (to the extent I think I have a reputation), I don't consider myself a good programmer at all. I am, rather, a determined programmer with a very limited set of tools, although I hear in some circles that that's really all you need to be a good programmer. But I stand by my self-assessment. I don't even "get" object-oriented programming and every foray of mine into it has left me not only confused but outright contemptuous toward programming as a profession. "This makes no sense! This whole paradigm has to be a fiction!" (I've grown a little and am beginning to sort of understand object-oriented programming at a conceptual level, but I wouldn't dare say anything I've ever programmed has been object-oriented in any meaningful sense. Even if/when I figure it out, I'll probably always think of programs as code that, you know, does stuff, not objects. I'm a verb guy, not a noun guy.)

Anyway, I've found this popular guide that had somehow slipped past me all these years and I've been following along with it and diligently taking notes. One thing I've learned that I don't think I had a good grasp of when I last used Lua is that its tables are associative arrays, which are mathematically like directed graphs (admitting loops, as well). It became clear to me why Lua's tables were always lauded as so versatile: an associative array is a generalized form of just about every data structure. Lists, linked lists, arrays, trees-- almost anything can be represented as an associative array. (Or maybe truly anything. I'm aware of more complex data structures, although they might be jury-rigged from associative arrays. In any case, they strike me as impractical in memory and/or computation.) You'll find just about all of the data structures I mentioned and more in my project below.

The End

So that's my life's story. I had this really versatile data structure to work with and immediately thought, "Well, what are associative arrays good for modeling? Markov chains!" When you type a word into your phone, it uses predictive algorithms based on the frequency of your previous texts to determine the most likely words you'll type next and offer them to you. I thought, "Why not do the same thing for English?" We can take collections of letters and string them together in a way that makes fake words! If my theory was right, these words would look like almost-English: something that, if you found it in the dictionary, you wouldn't be surprised to find there because it was just a little too obscure for you to know about.

That is why this post starts at "the end": everything I built toward was based on constructing this Markov chain, and I don't want you to leave because you're getting bored. I've represented the Markov chain pictorially below:

Red circles encompass our syllables. (Throughout this post, I'll be using the word "syllables" very loosely-- it's any collection of characters, including single characters, characters with/without vowels, apostrophes, or starts/ends of words.) We start on the left, in the empty string "", then traverse the graph randomly until we land on a suffix, any string ending in _ (underscore). For illustrative purposes, I've decided to show two real English words.
Follow the blue path and you'll read "_stochastic_", while the purple path leads you down "_pastiche_". Numbers in blue or purple are the weights of their associated arcs, which are proportional to the probability of traversing that arc (note that the blue and purple paths agree that the a --> st arc has weight 42). For example, if the only arcs exiting "" are _s and _p with weights 89 and 77 respectively, there is an 89/(89+77) = 53.6% chance of following the "" --> _s arc and a 46.4% chance of following the "" --> _p arc. Thin green lines represent arcs between nodes that are not taken in this random walk but are still represented in the graph. Starts and ends of words are both represented by underscores.

(So to be perfectly clear, this figure is a simplified version of what I produced. Differences include: 1) every solitary character is represented somewhere in the full graph because no syllable pattern is perfectly consistent, 2) the syllables represented here need not correspond to syllables actually used, 3) loops are permitted (so if "ba" and "na" are syllables, ba --> na --> na is permitted), 4) arcs are permitted to go in either direction between two nodes, albeit with distinct weights (so both er --> go and go --> er are allowed, with different probabilities), and 5) we traverse the graph randomly according to the weights, so it is relatively unlikely that we will produce real words such as "stochastic" or "pastiche".)

(Also, a quick look at Wikipedia says this might better be called a "flow network" or "flow graph". I'm not really up on the vocabulary-- ask FractalFusion.)

(And finally, you may be aware that associative arrays do not naturally describe weighted directed graphs, such as the one in my first figure. Under the hood, each outgoing branch effectively leads to a single linked-list element holding the weight and the destination node.)

So that's the end goal. Unfortunately, a question immediately arises...

Whence Syllables?

Sure, I could just use this concept with individual letters, or every pair of letters, or hand-picked syllables. (In fact, that's what my rough draft of this program did.) But we're programmers! We should be able to automate this stuff. Let me introduce you to Huffman coding. These are variable-length codes that store more frequently used symbols in shorter code words. They are the most efficient substitution code that can be constructed from a given file. (Other compression schemes may be somewhat more efficient, but as far as I can tell, they use more complex algorithms that can't reasonably be decoded by hand. I'm trying to emulate an efficient human brain here and Huffman codes are the way to do it!) I'm not going to walk you through Huffman coding here. Instead, you can stare at this figure until it makes sense to you:

It's actually pretty easy to understand, and once you get it, it makes a lot of sense why it's constructed this way and how it works. Also, incidentally, my program doesn't actually create any Huffman codes; it only computes the lengths of the code words, what I call the "Huffman score" for each character. With 28 characters (26 letters plus apostrophe and space), the naive way of encoding things is with five bits per character. This wastes bits on common letters like e and s, which appear very often, while squandering code words on rare letters like q and z.
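If you'd like something more concrete than the figure, here's a minimal sketch of computing Huffman scores without ever building the code words: repeatedly merge the two lightest nodes, and every merge adds one bit to the score of each symbol inside them. (A toy reconstruction, not the linked code; the sort-per-merge is crude but fine at this scale.)

-- Minimal sketch: compute Huffman code word lengths ("Huffman scores")
-- from a table of weights, without building the code words themselves.
local function huffmanScores(weights)
  local nodes, scores = {}, {}
  for sym, w in pairs(weights) do
    nodes[#nodes + 1] = {weight = w, symbols = {sym}}
    scores[sym] = 0
  end
  while #nodes > 1 do
    -- Grab the two lightest nodes (a heap would be faster than sorting).
    table.sort(nodes, function(a, b) return a.weight < b.weight end)
    local a = table.remove(nodes, 1)
    local b = table.remove(nodes, 1)
    -- Merging two subtrees pushes every symbol in them one level deeper,
    -- i.e. adds one bit to each of their code words.
    local merged = {weight = a.weight + b.weight, symbols = {}}
    for _, group in ipairs({a, b}) do
      for _, sym in ipairs(group.symbols) do
        scores[sym] = scores[sym] + 1
        merged.symbols[#merged.symbols + 1] = sym
      end
    end
    nodes[#nodes + 1] = merged
  end
  return scores
end

-- Common symbols get short code words, rare ones long:
for sym, bits in pairs(huffmanScores({e = 170000, s = 90000, q = 2500, j = 2500})) do
  print(sym, bits .. " bits")
end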
Feeding the entire dictionary into my Huffman coding algorithm with only our solitary characters assigns three-bit code words to common characters e and _ (over 170,000 instances) while rarely used characters q and j (2,500 instances) get ten-bit code words. This alone compresses the dictionary by about 10 percent! Here's a flow chart for what I'm talking about:

We start with our solitary character alphabet, search for those letters in the dictionary, record their weights (the number of times they appeared), and run that information through the Huffman coding algorithm.

Syllables Have More Than One Character...

Right. So let's again jump ahead a little bit and suppose we have a list of candidate syllables that we'd like to find the Huffman scores for. Follow along with this diagram, concentrating on going from left to right and ignoring the feedback loop for now:

We start with a list of syllables, including a through z, apostrophe, _, and some others. In the figure I've used s_ and ed as examples (s_ mostly signifies plurals, words ending in s, while ed could be found in words ending in ed or words with ed anywhere else in them, such as "editor" or "credit"). We search for their frequency in the dictionary and use those weights to find their Huffman scores, as we did with our individual characters.

(Another side note: The dictionary/word list I linked above, which I used throughout this project, doesn't include single-letter words like "a" or "I", nor does it include contractions. I manually entered "a" and "I" (to very little consequence) and I decided not to bother with contractions. My program is still ready to work with apostrophes all the same.)

Ah, but there's a problem! If you've paid close attention, you'll notice that the inclusion of new syllables changes the weights of other syllables because, for example, more instances of s_ and ed mean fewer instances of s, _, e, and d. Basically, the Huffman scores depend on the weights, which in turn depend on the Huffman scores. My solution is to feed the just-computed Huffman scores back into the sample text to obtain a better approximation of the weights. Of course, this doesn't necessarily mean our compression is yet optimal. I iterate through this loop until the number of bits used to encode the dictionary exceeds that of the previous iteration, then step back to the previous iteration and assume this local minimum is near a global minimum for the list of syllables. (Practically speaking, my program is able to reach this point in about three to five iterations, so it's computationally very reasonable and quickly homes in on a steady state. You'll also see that the final iteration(s) tend to show modest gains, maybe in the hundreds to thousands of bits out of about 8 million.)

Weights are approximated via a greedy algorithm, which employs whatever syllable has the greatest marginal benefit over the syllables we would use if that syllable weren't there. For example, if the characters "a", "t", "i", "o", and "n" have Huffman scores of 6, 6, 5, 5, and 7, and syllables "at" and "ion" have Huffman scores of 13 and 10 while "ation" has a Huffman score of 18, its savings are 5 because if it weren't in our syllable list, we'd use "at" and "ion" at a cost of 23, not "a", "t", "i", "o", and "n" at a cost of 29. In this example, other syllables present that would save more than 5 bits are prioritized over "ation".
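The bookkeeping behind those numbers is simple enough to sketch. Here the fallback decomposition is supplied by hand, whereas the real program has to discover it greedily; the "qu" figures come from the next paragraph. (A hypothetical helper, not the linked code.)

-- Sketch of the savings arithmetic: a syllable's savings is the cost of
-- spelling it out with the fallback syllables minus its own Huffman score.
local scores = {a = 6, t = 6, i = 5, o = 5, n = 7,
                at = 13, ion = 10, ation = 18,
                q = 17, u = 8, qu = 9}

local function savings(syllable, fallback)
  local fallbackCost = 0
  for _, piece in ipairs(fallback) do
    fallbackCost = fallbackCost + scores[piece]
  end
  return fallbackCost - scores[syllable]
end

print(savings("ation", {"at", "ion"}))  --> 5 bits saved per instance
print(savings("qu", {"q", "u"}))        --> 16 bits saved per instance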
If you want a real-world example, "qu" has outstanding savings at 16 bits per instance because it has a Huffman score of 9 compared with q and u's Huffman scores of 17 and 8, respectively. This is also a great indication that my program is working-- we'd expect "qu" to have a high priority because the vast majority of q's are followed by u's. The greedy algorithm probably isn't ideal, but I don't quite have the motivation to program an exhaustive algorithm, as it is very likely NP-hard and I expect its savings to be only marginal. Oh yeah, and in case two syllables have the same savings, the tiebreaker is whichever has more characters, as it shortens the target word the most.

You Still Haven't Said Where the Syllables Come From

We're there! We start with a list of syllables already in our extended alphabet. Follow this flowchart:

I assumed that the best candidate syllables are those composed of the best syllables already in our list. Two "popular" syllables are appended end-to-end (in either order) and offered as a candidate. This candidate must first meet a few criteria: 1) it cannot have any interior spaces (this would span multiple words), 2) it cannot be flanked by two underscores (this would be a full word, which again, is not what I'm trying to produce, plus it would be lame for securing only one match at most), and 3) it cannot already be in our "graveyard", syllables that have already been shown to have no matches. In the flowchart above, "qu_hyp", "ing_qu", "ing_ing_", "ing__hyp", and "_hyp_hyp" would all fail for having interior underscores. Along with them, "_hyping_" would fail for having two exterior underscores. Only "ququ", "quing_", and "_hypqu" are potentially viable, provided they aren't in the graveyard (which they almost certainly are, given that they look almost nothing like real English).

Those three criteria are easy to check (see the sketch below), so once a syllable has passed them, I can scan the dictionary for it. If the syllable appears at least twice in the dictionary, we add it to our formal list of syllables. (Why twice? If the syllable appears just once, it can't possibly save any space, because encoding it in our extended alphabet costs at least as much information as it saves. Interestingly, this means "zyzzyva" is potentially a viable syllable because "zyzzyvas" is also found in the dictionary and they've got some really expensive letters in them. So far, "zyzzyva" hasn't shown up in my searches, but maybe I should just program it in manually and see what happens...) If a syllable appears once or never, we add it to the graveyard as a quick reminder that we never need to search for it again.

I arbitrarily decided to grow my list of syllables just 30 at a time under the assumption that growing it too aggressively is not only computationally expensive but also might lead to certain "instabilities" in the algorithms, causing them to under- or over-score some syllables. Also, I've done nothing to prune syllables that made the cut but haven't been useful because they have negative savings. I did briefly investigate it by assigning syllables random Huffman scores over 50 or so trials and found that out of about 1,200 syllables that made the cut, only about 100 ever show up in any resulting extended alphabet. I'm being very conservative, assuming that perhaps two syllables might be junk but their child might produce major savings.
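Here's the promised sketch of those three viability tests-- cheap string checks that run before the expensive dictionary scan. (A hypothetical helper; the real checks live in the linked code.)

-- A candidate is two existing syllables concatenated end-to-end;
-- "_" marks word boundaries.
local function isViable(candidate, graveyard)
  -- 1) No interior underscores (the candidate would span two words).
  if candidate:sub(2, -2):find("_", 1, true) then return false end
  -- 2) Not flanked by underscores on both ends (it would be a whole word).
  if candidate:sub(1, 1) == "_" and candidate:sub(-1) == "_" then return false end
  -- 3) Not already shown to be useless.
  if graveyard[candidate] then return false end
  return true
end

local graveyard = {ququ = true}
print(isViable("qu_hyp", graveyard))    --> false (interior underscore)
print(isViable("_hyping_", graveyard))  --> false (two exterior underscores)
print(isViable("ququ", graveyard))      --> false (already in the graveyard)
print(isViable("quing_", graveyard))    --> true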
After adding enough syllables to my satisfaction, all that remained was producing the graph according to how often each syllable transitions to each syllable.

So... Can I Have a Recap? (Or TL;DR?)

Sure:
  1. Start with the dictionary and our usual alphabet, plus apostrophe and _. Convert the text to lowercase and replace everything with underscores except letters and apostrophes.
  2. Search the dictionary for each letter, assigning a weight to the letter according to how many times it appeared in the dictionary.
  3. Use these letter frequencies to compute a Huffman score for each character.
  4. Merge any two popular characters into a syllable. If this syllable passes a few viability tests, not least of which is appearing at least twice somewhere in the dictionary, add it to our list of syllables. Create 30 (or however many) new syllables in this manner. To give them the best chance of success, make each new syllable cost just one bit.
  5. Using our now extended alphabet, encode the dictionary again and use the resulting weights to compute the Huffman scores. This first encoding of the dictionary is completely artificial, since we arbitrarily assigned the new syllables a Huffman score of 1, which no real code could achieve. The output Huffman scores, however, are possible, albeit not ideal.
  6. Repeat step 5 until the number of bits needed to encode the dictionary is minimized.
  7. Return to step 4 and repeat from there, growing our list of syllables as much as desired. I ran my program on this loop for about six hours and got the dictionary down from 1065 KB to 892 KB. I think if I ran the code for much longer and made some improvements, I could get that down significantly lower, maybe in the ~800 KB range.
  8. Optionally, conduct a "losers' tournament", omitting all syllables with at least one bit of savings and forcing it to use syllables that have negative savings, then trying to fold these syllables back into our list. I did this and found some interesting losing syllables, but if I remember correctly, re-merging them with our list didn't produce any savings. None of them stuck.
  9. Optionally, assign each syllable random Huffman scores and savings and see if we've accidentally excluded some good ones because they were never given a proper chance. I did this for ~50 iterations and found that over 90 percent of syllables are never used and will probably do just as much good in the graveyard. I put these syllables in a new table I called "hospice", but I haven't done anything else with them.
  10. Optionally, just add some syllables manually. I haven't done this, but I have been disappointed that "olog" hasn't made the cut, since it permits olog --> "ic", "ic" --> "al_", "y_", "ies_", "ist", "ist" --> "s_", "ism", "ism" --> "s_", "ize", and "ize" --> "s_", all off the top of my head. Instead, "ologi" is the form my program prefers, which admittedly is still pretty good. Anyway, the point is that brute calculation might not produce the best results, but I certainly can draw inspiration from syllables that have made the cut.
  11. Using our finished list of syllables, construct a weighted directed graph representing the likelihood of transitioning from each syllable to each syllable, if permitted.
  12. Randomly walk this graph according to the arc weights. (A minimal sketch of such a walk follows this list.)
  13. Optionally, clean up the results by excluding a) words with no vowels (sorry "hmm", "shh", and "cwm"), b) words with letter triplicates (nooot alllowwwed), c) words already in the dictionary (because, you know, they're supposed to be fake words). I wrote this code but never actually incorporated it...
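Since step 12 is the payoff, here's a toy version of the random walk over a graph stored as nested tables. Every weight below is invented except the a --> st arc from the first figure, and the real program stores each arc as a linked-list element rather than a nested table:

-- Toy sketch of step 12: a weighted random walk over a graph stored as
-- {node -> {successor -> weight}}. Weights are invented for illustration.
local graph = {
  [""] = {_st = 89, _p = 77},
  _st  = {och = 3},
  _p   = {a = 5},
  och  = {a = 7},
  a    = {st = 42},          -- the a --> st arc from the figure
  st   = {ic_ = 11, iche_ = 2},
}

-- Pick a successor with probability proportional to its arc weight.
local function step(node)
  local arcs = graph[node]
  local total = 0
  for _, w in pairs(arcs) do total = total + w end
  local r = math.random() * total
  for succ, w in pairs(arcs) do
    r = r - w
    if r <= 0 then return succ end
  end
end

math.randomseed(os.time())
local node, word = "", ""
repeat
  node = step(node)
  word = word .. node
until node:sub(-1) == "_"    -- stop once we land on a suffix
print(word)  -- e.g. "_pastiche_" or "_stochastiche_"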
For Goodness' Sake, Bobo, Does it Work???

Hell yeah! I mean, I think there are probably a few errors in there, including a potential miscalculation of _ frequency, which I just thought of. But leading syllables (most savings per instance) include:
  • abilit (top saver, encoded in 10 bits and saving 18 bits)
  • nesses (but why not "nesses_"?)
  • qu
  • ously
  • ologi
  • ities_
  • comp
  • ing_
  • ation
  • ight
There are also some surprises soon after that, including _hyp, nde, cula, and vel. But the point is that these all look like English, so they're great building blocks. And what do these building blocks produce?
  • sapsuopperagentization
  • graphed
  • wols
  • claympherulations
  • precounhism
  • compens
  • animseltismicalcalvol
  • sestoushs
  • st
  • tymphaturate
  • kaziontration
  • troussed
  • asurgs
  • overterressign
  • distirdignallle
  • fingulticulas
  • coggamioncoded
  • toolatibilchernosittptindiness
  • devolideously
  • henlists
  • initalg
  • dists
  • fibrininesses
  • nated
  • slementarysting
  • ser
  • comftieingudredert
  • in
  • whectoman
  • fibrisequambativenesses
  • unbirringenufficadricalise
  • oilboateangeraugherack
  • isal
  • provengwintibal
  • nit
  • punatrever
  • ser
  • drinationisttlefonter
  • earities
  • targeourals
  • nah
  • drawfillogogea
  • regs
  • rastical
  • lan
  • jophalization
  • addishaporitides
  • poutheter
  • salize
  • grothinoituchygly
Prostetnic Vogon Jeltz, eat your heart out! Get the code here!
Skilled player (1673)
Joined: 7/1/2013
Posts: 453
Bobo the King wrote:
Prostetnic Vogon Jeltz, eat your heart out!
Ha!
Player (80)
Joined: 8/5/2007
Posts: 865
Get the code here!

The Ising Model of a Ferromagnet

A two-part adventure...

I've now completed half of a long-term dream of mine: implementing the Ising model of a ferromagnet. This is a really cool physics exercise that attempts to explain (at least qualitatively) why iron can be magnetized but beyond a certain critical temperature (the Curie temperature), that magnetization disappears. We imagine a chunk of iron as flattened out in two dimensions on a grid, made up of individual iron atoms. The magnetization of each of these atoms (spin-up or spin-down, we call them) is influenced by its nearest neighbors. If its neighbors are all spin-up, it will tend to be spin-up as well, and likewise vice versa. If this were all there were to ferromagnetization, the picture would be very boring: single atoms flip until three of their neighbors are alike, then they remain stuck in that position.

The extra wrinkle comes from randomization, which in the realm of physics is manifested through temperature. Give the material a nonzero temperature and there is a nonzero probability that each dipole will flip despite it being energetically unfavorable to do so. This probability of spontaneously flipping is e^(-U/kT), where U is the difference in energy between flipping and not flipping, T is the temperature, and k is Boltzmann's constant (e is, of course, 2.71828...). The justification for this comes from Boltzmann statistics and something called the Metropolis algorithm, but if you're looking for a detailed explanation here, you won't get it from me because it's been a long time since I studied this stuff, I wasn't very good at it in the first place, and I don't want this post to ramble on and on and on... as I've been known to do.

But knowing physics isn't even necessary to translate the pseudocode into Lua! This is a standard simulation, and Thermal Physics by Schroeder is able to knock out the pseudocode in about a page, maybe 40 lines or so. I've tried to gussy it up a bit for Lua's quirks and my own version is about 100 lines. (A bare-bones sketch of the update loop is at the end of this post.) I can't say I've learned much from this project, but one thing that came as a surprise is that using floor(2*math.random() - 1) to set my initial state caused everything to go bonkers and I ended up with absurd results: an initial state that was graphically random yet showed a strong net magnetization numerically, which then migrated to a state that was dominated by the spin-up state yet showed no net magnetization numerically. I still have no idea what was causing that. Just use 2*math.random(2) - 3 instead.

All right, if you're skipping all my paragraphs (and I don't blame you if you are), you should know that this project produces pretty pictures (well, "pretty" by a fairly low standard). Let's start with the most boring, in which I set the temperature to 3:

White represents spin-up, black represents spin-down. If the temperature were high enough, the pixels would be completely random, and although there doesn't seem to be much order in the above image, if you look closely you'll find that there's a little bit of "clumping" and pixels have some inclination toward coalescing in small clusters. We get more interesting domains when we set the temperature to something like 1.5 and run it for a couple minutes:

Well, now I'm sure you believe me when I say that pixels are likely to coalesce in clusters. Any small clusters get "eaten away" and erode to zero pixels rather quickly.
This also smooths out the boundaries of the large clusters we see, for much the same reason-- any sharp protrusions represent pixels that are more likely to be surrounded by three pixels of the opposite color, causing them to flip. (On a tangential note, the pattern produced resembles a Turing pattern, and I'm sure someone has made the connection between the two, showing that at low temperatures the Ising model produces some manifestation of Turing patterns. There's also a great tutorial on how to produce Turing patterns in Photoshop in a few minutes, although I much prefer the free online alternative, Photopea.)

We've shown in this crude model that this material can be "magnetized". That is, once the spins are preferentially aligned in one direction (say, by imposing an external magnetic field), they remain aligned and will persist. You can then carry this magnet around and observe its net magnetization by sticking it to a refrigerator or something. But to be clear, this is apparent in the above picture because it permits large structures-- structures that span the entire image. You can easily navigate a path that runs from one side of the image to the other while staying entirely within the black domain.

So I've shown you one image corresponding to a "high" temperature that has tiny clusters and a second image corresponding to a "low" temperature with large clusters. The interesting thing-- to me at least-- happens when we strike a balance between the two. Can we find a "critical temperature" that exhibits structure at all scales? Yes, here it is at 2.2692:

Edit: The above image isn't wrong by any means-- it is at the critical temperature and I ran it for a good long while. But it turns out that the system equilibrates very slowly when near the critical temperature. I've gotten around this issue; see my following post.

This may not seem like much at first, but if you look closely, you'll see that it's a bit "clumpier" than the T=3 image and there are in fact clusters that are large as well as small. (I'm not quite sure how to quantify this, but I've long imagined that to do so, we'd pick a pixel at random and it would be just as likely to be part of a tiny one-pixel "clump" as it is to be part of a massive grid-spanning clump or anywhere in between.) This structure at all scales gives rise to some interesting predictions. I once saw a seminar in which this phenomenon was demonstrated by allowing material to precipitate out of a solution, turning a cloudy white in an instant, the color being a product of the Rayleigh scattering of light of all frequencies by particulates of all sizes. This was then used as a metaphor for precipitation events in the stock market (no, I'm not kidding). And it is this same structure at all scales that will motivate (I hope) part 2 of my little project, a second, unrelated piece that I'm actually most interested in. Stay tuned...
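P.S. As promised, here's a bare-bones sketch of the Metropolis update loop-- my reconstruction from the description above, not the linked script, with an arbitrary lattice size:

-- Bare-bones sketch of the 2D Ising model with the Metropolis algorithm.
-- Reconstructed from the description above; not the linked script.
local N, T = 100, 2.2692          -- lattice size and temperature (k = 1)
local s = {}
for i = 1, N do
  s[i] = {}
  for j = 1, N do
    -- math.random(2) returns the integer 1 or 2, so this gives -1 or +1.
    -- (Note: floor(2*math.random() - 1) instead yields -1 or 0, because
    -- math.random() with no arguments is uniform on [0, 1), which would
    -- quietly leave half the spins at zero.)
    s[i][j] = 2 * math.random(2) - 3
  end
end

local function metropolisStep()
  local i, j = math.random(N), math.random(N)
  -- Sum the four nearest neighbors, with periodic (wraparound) boundaries.
  local nb = s[i % N + 1][j] + s[(i - 2) % N + 1][j]
           + s[i][j % N + 1] + s[i][(j - 2) % N + 1]
  local dU = 2 * s[i][j] * nb     -- energy cost of flipping this spin
  -- Flip if it lowers the energy; otherwise flip with probability
  -- e^(-dU/kT), the Boltzmann factor discussed above.
  if dU <= 0 or math.random() < math.exp(-dU / T) then
    s[i][j] = -s[i][j]
  end
end

for n = 1, 100000 do metropolisStep() end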
Patashu
He/Him
Joined: 10/2/2005
Posts: 4045
This is really pretty!
My Chiptune music, made in Famitracker: http://soundcloud.com/patashu My twitch. I stream mostly shmups & rhythm games http://twitch.tv/patashu My youtube, again shmups and rhythm games and misc stuff: http://youtube.com/user/patashu
Player (80)
Joined: 8/5/2007
Posts: 865
Quick update: I felt a little guilty about posting the critical temperature image in my previous post, since I had to follow it with, "If you look closely, you'll see that it's a bit 'clumpier' than the T=3 image." It was kind of disappointing to me because it didn't really exhibit the structure at all scales I'd hoped to show. Yet I pretty much knew it was correct because the program is pretty simple and it definitely worked in temperature domains above and below the critical temperature, so I just rolled with it.

Well, there's a reason the image was disappointing. Here's what Wikipedia has to say:
Wikipedia wrote:
It is important to note that the Metropolis–Hastings algorithm does not perform well around the critical point due to critical slowing down. Other techniques such as multigrid methods, Niedermayer's algorithm, Swendsen–Wang algorithm, or the Wolff algorithm are required in order to resolve the model near the critical point; a requirement for determining the critical exponents of the system.
So the issue was that I had started from a random state and, thanks to the underlying mathematics, it was converging to the "structure at all scales" state very slowly. I'm not familiar with Niedermayer's, Swendsen-Wang, or Wolff algorithms, nor do I want to double the size of my program or worse by attempting to implement them. Instead, my workaround was to adjust the temperature.

Through a handful of tests at various temperatures, I was able to determine a "settling time" of about 500 frames over which clusters coalesce when the temperature is below the critical temperature and clusters break apart when the temperature is above the critical temperature. I therefore set the temperature to oscillate every 500 frames, decreasing the magnitude of its oscillations by a factor of 0.8 each time (sketched below). The final temperature is still the critical temperature, 2.2692; it just approaches it asymptotically. Here's what it looks like after having run for about 20,000 frames, about 40 cycles:

The temperature agrees with the critical temperature to the fourth decimal place. I hope you now believe in structure at all scales. Here's the updated code.
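For reference, the oscillating schedule has roughly this shape (a sketch, not the linked code; the starting amplitude is a guess, while the 500-frame period, the 0.8 decay factor, and Tc = 2.2692 are from above):

-- Sketch of the oscillating temperature schedule described above: hold for
-- 500 frames, alternate above/below Tc, and shrink the swing by a factor
-- of 0.8 each cycle, so T approaches Tc asymptotically.
local Tc = 2.2692          -- critical temperature of the 2D Ising model
local amplitude = 0.7      -- starting swing (a guess)
local period = 500         -- frames per swing, per the post

local function temperatureAt(frame)
  local cycle = math.floor(frame / period)
  local swing = amplitude * 0.8 ^ cycle
  if cycle % 2 == 0 then
    return Tc - swing      -- start below Tc so clusters coalesce first
  else
    return Tc + swing
  end
end

print(temperatureAt(0))      --> 1.5692
print(temperatureAt(20000))  --> ~2.26911, within 1e-4 of Tc after 40 cycles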
Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Bobo the King wrote:
I don't even "get" object-oriented programming and every foray of mine into it has left me not only confused but outright contemptuous toward programming as a profession. "This makes no sense! This whole paradigm has to be a fiction!" (I've grown a little and am beginning to sort of understand object-oriented programming at a conceptual level, but I wouldn't dare say anything I've ever programmed has been object-oriented in any meaningful sense. Even if/when I figure it out, I'll probably always think of programs as code that, you know, does stuff, not objects. I'm a verb guy, not a noun guy.)
I will just comment a little bit on this to tell you what it's like to have programming as a profession, and maybe you can take from it what's best for you.

First of all, working with programming as a profession involves problems very different from those you attacked in this post. Although you can get paid to do something similar to using data structures and numeric simulations, from my experience there's a reversal of logic in these roles. Usually they already have a result they want to achieve, either because they want to sell something or because they got funding to study whether it's the best, and you have to come up with a method to achieve it, and the ethics of this are very questionable. In any case, it must be clear to you that doing something for other people is quite different from doing it for yourself. Just look at the difference between cooking for you and your friends and opening a restaurant. Because of this, most problems you will get paid for solving have the following characteristics:

1) The person asking you to solve them usually has no idea of how they are solved, and doesn't really want to know (just ask yourself: if someone knew all the secrets to cooking, would they pay a restaurant to make food for them or do it themselves?)
2) They are "dumb" problems, in the sense that it's pretty clear that they have a solution, but it takes a lot of effort to implement them.
3) The biggest difficulty is in scaling, because for any business to be profitable, it needs to have lots of customers. Because of that, the limitation is usually on IO, not on processing time, and the theory of distributed systems is much more relevant than standard algorithms for solving them.
4) It's inefficient to build something from the ground up rather than use something which is already working, so you need to be willing to work with technologies you do not like.
5) Almost all problems can be solved using a CRUD design. This actually becomes fun after some time, because you start trying to figure out how most software you use works by applying this principle :P

Also, I'd like to say that it's perfectly fine to not like OOP; it has recently received some criticism in the industry, and lots of people are using functional programming concepts along with it. However, if you get used to the problems most programmers need to handle, you will find out that OOP is a pretty reasonable paradigm, and even if you are not interested in those problems, you can use it to make the code that solves the problems you do find interesting accessible to nontechnical people.

I remember using a complicated algorithm exactly once to solve a business-related problem. Some criminals had managed to commit credit card fraud by creating fake accounts, and we thought they had made a mistake by using a small number of devices to create accounts for many document numbers. In the logs, we could see the device-id that created the account for a given document number, and could also list all document numbers that a device-id interacted with. From this, I could construct a bipartite graph, and using depth-first search starting from a number of accounts where the fraud was confirmed, I managed to get the entire connected component, which not only allowed us to track the devices used but also showed which accounts were affected, and we could close some before they stole more money.
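If you want to picture that search, here's a toy sketch in this thread's Lua (the data, names, and shapes are all made up for illustration; the real thing ran over our logs):

-- Toy sketch of the bipartite-graph search described above: starting from
-- confirmed-fraud document numbers, walk document <-> device edges to
-- collect the whole connected component. All data here is invented.
local devicesOfDoc = {doc1 = {"devA"}, doc2 = {"devA", "devB"},
                      doc3 = {"devB"}, doc9 = {"devZ"}}
local docsOfDevice = {devA = {"doc1", "doc2"}, devB = {"doc2", "doc3"},
                      devZ = {"doc9"}}

local function component(seedDocs)
  local seenDocs, seenDevs, stack = {}, {}, {}
  for _, d in ipairs(seedDocs) do
    seenDocs[d] = true
    stack[#stack + 1] = d
  end
  while #stack > 0 do
    local doc = table.remove(stack)           -- depth-first: pop the top
    for _, dev in ipairs(devicesOfDoc[doc] or {}) do
      if not seenDevs[dev] then
        seenDevs[dev] = true
        for _, other in ipairs(docsOfDevice[dev] or {}) do
          if not seenDocs[other] then
            seenDocs[other] = true
            stack[#stack + 1] = other
          end
        end
      end
    end
  end
  return seenDocs, seenDevs
end

-- Starting from one confirmed fraud, this recovers doc1..doc3 and devA/devB,
-- while the unrelated doc9/devZ pair is untouched.
local docs, devs = component({"doc1"})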
However, this is the exception rather than the rule, and if you're willing to consider programming as a profession, ask yourself these questions:

1) Are you willing to see yourself more as a consultant for the business who handles technical issues, rather than a programmer who's paid to write code?
2) Are you willing to understand how to sell a product so that both you and the person paying you are successful in the end?
3) Are you willing to spend a considerable amount of your day explaining simple things to non-technical people?
4) Are you willing to use technologies you do not like in order to deliver a product?

In any case, best of luck to you, and welcome back. The code you wrote is pretty cool!
Player (80)
Joined: 8/5/2007
Posts: 865
That's a very thoughtful comment, p4wn3r, and I hope you copy and paste it to other forums so it reaches a wider audience. I've long had a sort of vague aversion to programming-- there's a whole lot of stuff I don't get (currently struggling with closures) while a lot of the stuff I do get sits uneasily in my brain. As an analogy, I hated writing reports in school and since graduating, I've had many opportunities to read students' reports, provide feedback, and even produce some of my own. I've grown to understand that while I was pretty bad at writing reports in college, I've become pretty good at it since then. (After all, these latest posts of mine are nothing but informal reports.) But assign that to me as a task or make it my career path? Blech, please no. That's much how I feel about programming and I think just about anyone reading this can relate somehow; we've all got subjects that give us less satisfaction despite being at least okay at them.

Regarding OOP specifically, I think one reason I've never much liked it is because I've simply managed to avoid any tasks for which it would be a good paradigm. Two places I've seen it used (at least potentially) are in GUIs and video games. When I now look at my possibly favorite game, Mario 3, I can easily imagine each enemy as an object, assigned aspects like type, position, and behavior. You'd almost be a fool to avoid OOP when making a video game, or at least one in the vein of Mario 3.

Another issue is that when I was forced to learn Java one semester back in college, I had a very poor understanding of hardware architecture and what goes on behind the scenes. I obviously knew about hard disk space from a practical standpoint and had some notion of using magnets to flip bits, but how hard disks differed from RAM, especially in their access times, was pretty foreign to me. Gaps in my knowledge such as that made it such that when I was programming in Matlab a few years later, I didn't understand why my programs would run so slowly when I had arrays growing dynamically as space was needed. The preallocation of memory that we were forced to do under Java always elicited a "huh?" from me and I now see that having your data structures grow without strict bound puts a lot of pressure on cleanup routines that have to copy and shove nearby data out of the way to make room. It was only when I started learning assembly for TASing that I began to make these important connections: computers are a form of Turing machine, allocating memory is crucial, and object-oriented programming has an important place in the way we code. Now whenever I'm coding or struggling with a new concept, I always have an eye toward what the corresponding assembly language might look like, at least conceptually. I think I've made it clear, but just to reiterate, none of that means I'm good at any of the above, just that I think I can connect the dots in the big picture in a way that ten years ago, I'd have said it's all completely dumb and arbitrary.

Even though a handful of people in my life have suggested, "Why don't you get into programming as a career path?" I've always been resistant to the idea. Your list of questions bolsters my stance, as none of them screams to me, "Ah yes, that'd be the life for me!" On a highly tangential note, I sometimes think back to a community college job I once applied to and the professor I was to work for wanted me to write a short Word document.
I did one or two basic formatting changes-- I think I centered the text-- and he remarked, "Wow, you're much better at this than I am!" (I somehow didn't get the job, despite Professor Dinosaur not being able to use a word processor.) That's what I think of when I imagine programming as a career: your boss has no idea what you're doing so he throws a pittance in your direction so you can do a bunch of basic stuff. There are "sexy" programming jobs out there, but they're highly competitive and I already pretty well know I can't function at that level and deal with criticism and demands from my bosses and/or colleagues. Instead, I'd be left with the gruntwork.

Programming is really a weird career path where there's this huge disparity of need between, as you point out, companies that just need basic tasks done because everyone there is tech-illiterate versus domains where coding fast, elegantly, concisely, efficiently, or creatively actually gets you a six figure salary and some prestige. On the whole, it strikes me as a profession where being positively elite and having "the mind of a programmer" will get you everything you want, but for people who get roped into it because they hear people chanting "STEM" but otherwise show no great aptitude for it, programming will earn you a living, just not a particularly good one.

By the way, I have an upcoming project for which OOP is definitely a good paradigm choice and I'm hoping I can scrape something together without falling flat on my face.
Warepire
He/Him
Editor
Joined: 3/2/2010
Posts: 2178
Location: A little to the left of nowhere (Sweden)
Software developer as well, different field from p4wn3r. (I am mostly posting this to show that software development has different areas with different challenges.) I work with embedded systems, meaning that the products are not just software but a physical device as well. This changes the aspects a little bit:

1) You don't write just one program; there's usually a whole bunch and they interact with each other.
1b) Most of those programs will usually be relatively simple.
2) You will encounter OOP, but most of it is going to be of the level you described in your original thoughts about when OOP makes sense.
3) You will be integrating and debugging a lot of others' code, sometimes more so than writing your own.
4) You will dive into several programming languages. (So far for me it has been C, C++, Python and some Assembly, but languages like Go and Rust are making a strong entrance, and I am sure I will one day encounter Lua as well.)
4b) You will dive into several ways of compiling stuff.

So, what usually happens is that a customer needs a device, and they have a set of requirements for the device. My task (together with my co-workers) is to get the software in the device to make the device do what the customer needs it to do. The problems we face when working on this are somewhat like p4wn3r's problems, in that they are "dumb" problems unless you manage to land a job in research. The major change is that most embedded devices can't go around doing distributed work because data plans are expensive and a recurring cost for your customer; however, this also affects the problems you are presented with in the sense that they often don't require a lot of complicated computations. Sometimes you need to do things from the ground up because the pre-existing solutions are either too expensive, written in a language you cannot use in the product (reasons vary from product to product), or open source with a license that you cannot put in your product (but you are rarely the one who has to understand the legalese; there are other people for that in these companies).

There will be less explaining to non-technical people; you will mostly be talking with your co-workers (who are working on the same embedded systems as you are) and with engineers from other companies, for both hardware and software, when you need help using something and you cannot work it out yourselves. There will still be some explaining to non-technical people, but most of the time those things are handled by someone else. Caveat: None of the systems I work with come with a display. I have no idea whether systems with a display change the outline I gave above or not.

The above also changes the questions:

1) Are you interested in solving problems with varying and changing difficulty? (Sometimes easy, sometimes hard; sometimes they change shape as you go.)
2) Are you willing to go on a lot of adventures? (You will be encountering technologies you're not familiar with and need to read documentation; you may not enjoy these technologies.)
3) Are you willing to have days when you are absolutely stumped over why something isn't working and getting no feedback, sometimes feeling like you're making reverse progress?
4) Are you willing to work close to hardware and deal with bugs that may be caused by hardware design? (Not necessarily a broken design, but quirky.)
5) Are you willing to work in a field that very often has VERY hard deadlines?
(You often need to put a product on the market before a certain date to be at all viable.)

I fondly remember one product where we had power-saving requirements that required us to turn off a device on PCI Express when we didn't use it, and the PCI Express controller didn't support hotplug (meaning that once the device is removed, the controller cannot find it again without a reset). So I had to figure out how to make the driver for the PCI Express controller reset the controller before we powered the device back on, without rebooting the system, which was the only way the operating system in the product could reset the PCI Express controller. I liked that challenge because it was the first time we had done a product with PCI Express in it. Those sorts of challenges are not that frequent; there are also a lot of "when X happens, make product do Y" tasks, and lots in between.
Player (80)
Joined: 8/5/2007
Posts: 865
Thank you for your additional thoughts, Warepire. Much of what you've written seems to overlap nicely with TASing-- working with different hardware systems, different levels of challenges, and days where you're completely stumped. I love this hobby, but it doesn't strike me as the right career path for me. Maybe it will lead me in a new direction, though. I'm still slogging through the Programming in Lua guide and although much of it is going over my head (section 8.2 on "C Packages"?), I do feel like I'm opening new doors to my understanding.

I have a couple of small updates to my projects. The more interesting one comes from my Ising model. I realized that gui.drawPixel is computationally expensive, and even though I cut down on its use by drawing only the less common color against a backdrop of the more common color, I still needed to go about it more efficiently. I came up with two solutions: draw line by line or block by block. Line by line is easier to implement, scanning pixels horizontally until it finds a change in color, then drawing a line to knock out a whole bunch of pixels at once. The alternative, block by block, subdivides the screen into a 16x16 grid of blocks of size 15x10, then finds the dominant color (the magnetization) within each block and draws the pixels within line by line. This is apparently a sort of crude version of what is known as the "quadtree" graphics algorithm. The block by block algorithm has the advantage of using the dominant color of each region, which is nice because, thanks to the structure at all scales feature, we expect large regions of the non-dominant color. Its disadvantages include being more difficult to program and "wasting" truncated line by line executions when, say, a black line needs to be drawn at the end of one block and it continues into the next block, necessitating two executions of gui.drawLine. (By the way, I discovered along the way that gui.drawLine does not accept single-pixel inputs, such as gui.drawLine(30,40,30,40,"black"). As such, I had to cram both gui.drawLine and gui.drawPixel into one function I call line; a sketch of it is at the end of this post.)

So which algorithm wins? They're pretty closely matched, but line by line is consistently more efficient. It speeds up the whole program by a factor of two to three, so I am now able to achieve a nice smooth 30 fps. It used to take a few hours to get the field to "equilibrate" and now it takes roughly an hour, maybe a bit less. Below is a gif of the program's output set to 30 fps, matching what I see when I run it on my computer:

Here's a link to the updated program. To cycle through the draw styles, press the D key. I've left the third draw style empty, but if you'd like to compare it with pixel by pixel, you can uncomment line 287. You can also display the magnetization and temperature by pressing space or enter, respectively.

The other minor update I have is to my fake words project. I had a "duhhhhhh!" moment and realized my method for adding new syllables was kind of ridiculous. My best guess is that the way they should be added is to construct the network of syllables and add new syllables according to the product of their weights and total Huffman scores, the total number of bits used on those syllables. Unfortunately, I found editing my script rather confusing and I had to suppress a couple of errors, so when I ran it for a few hours it only sort of worked. Some of the syllables it picked up were quite sensible: "abilitie", "ucle", "ously_", "ograph_", and "ologies_", for example.
But the vast majority of the surviving syllables are just two characters in length, with only 56 (plus the original 27) being used at all, far too few to make me think that the program is working properly. Tons of other good syllables ("izations_", "ifying_", "super", "olog", "edly_", "ability_", "fulness", "ish_", etc.) are in my spreadsheet, but for some reason they have negative savings or zero instances. I've decided much of the program needs a rewrite. I'm planning on implementing an object-oriented paradigm for its data structures, which should be extremely well suited to the directed graphs and Huffman trees involved in the project. If it works as well as I expect, I should be able to cut down significantly on the length of the program and make it much easier to debug. This is in addition to two or three other object-oriented projects I have planned.

Edit: And I just now noticed that most individual letters pick up only ~200 instances, with some letters picking up 0. Yeah, this is broken.

Anyway, happy new year! I'm going to keep plugging away at these projects and I swear that if I manage to follow through on them, you'll see some interesting results.
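P.S. Here's the promised sketch of that line function (my guess at its minimal shape; the real one is in the linked code, and gui here is the emulator's drawing library):

-- gui.drawLine rejects degenerate single-pixel "lines", so fall back to
-- gui.drawPixel in that case. Assumes the emulator's gui library.
local function line(x1, y1, x2, y2, color)
  if x1 == x2 and y1 == y2 then
    gui.drawPixel(x1, y1, color)
  else
    gui.drawLine(x1, y1, x2, y2, color)
  end
end

-- line(30, 40, 30, 40, "black") now works where gui.drawLine alone failed.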
Patashu
He/Him
Joined: 10/2/2005
Posts: 4045
Really pretty!
My Chiptune music, made in Famitracker: http://soundcloud.com/patashu My twitch. I stream mostly shmups & rhythm games http://twitch.tv/patashu My youtube, again shmups and rhythm games and misc stuff: http://youtube.com/user/patashu