Joined: 7/2/2007
Posts: 3960
n&1 is two operations: a bitwise and, and then a comparison of the result against zero. n%2 == 0 is likewise two operations, but they're explicit. (n>>1)<<1 is three operations: two bitshifts, and then a comparison against the original value. Of those three options, I'd go with the one that people are used to reading. As for peppering your functions with inline assembler, I'd wager that the compiler can create more efficient assembler than you can in the vast majority of situations. If you're actually better at assembler than the compiler is, then go ahead...so long as you comment the assembler extensively, and it's only used where it actually makes a difference.
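For reference, here are the three spellings side by side (just a sketch, not benchmarked; the function names are made up, and I'm sticking to unsigned to sidestep questions about negative values):
#include <stdbool.h>

/* Three ways of asking "is n odd?" -- equivalent for unsigned n. */
bool odd_and(unsigned n)   { return (n & 1) != 0; }           /* mask off the low bit, compare */
bool odd_mod(unsigned n)   { return (n % 2) != 0; }           /* take the remainder, compare */
bool odd_shift(unsigned n) { return ((n >> 1) << 1) != n; }   /* clear the low bit, compare with n */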
Pyrel - an open-source rewrite of the Angband roguelike game in Python.
Joined: 10/3/2005
Posts: 1332
Derakon wrote:
As for peppering your functions with inline assembler, I'd wager that the compiler can create more efficient assembler than you can in the vast majority of situations. If you're actually better at assembler than the compiler is, then go ahead...so long as you comment the assembler extensively, and it's only used where it actually makes a difference.
Besides that, wouldn't peppering the code with x86 assembler make it impossible to build for an x64 target?
Joined: 9/6/2009
Posts: 24
Location: Renton, WA
Note that n%2 and n&1 don't mean the same thing... (-1)%2 is -1, but (-1)&1 is 1. The code for "int n = ...; return n%2" is much longer/slower than the code for "int n = ...; return n&1". For this reason, I would use n&1 instead of n%2 (or else make sure that n is an unsigned int). (gcc 4.3.4 is able to optimize "(n%2)!=0" to use the same fast code as "(n&1)!=0", but I prefer not to depend on the compiler being smart enough to implement non-obvious optimizations. I don't know whether other compilers include this optimization or not.)
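A quick way to see the difference (just an illustration; the output assumes C99-style truncating division and a two's-complement machine):
#include <stdio.h>

int main(void)
{
    int n = -1;
    printf("%d %% 2 = %d\n", n, n % 2);   /* prints -1 % 2 = -1 with truncating division */
    printf("%d & 1 = %d\n", n, n & 1);    /* prints -1 & 1 = 1 on a two's-complement machine */
    return 0;
}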
Joined: 4/7/2008
Posts: 117
cwitty wrote:
Note that n%2 and n&1 don't mean the same thing... (-1)%2 is -1, but (-1)&1 is 1. The code for "int n = ...; return n%2" is much longer/slower than the code for "int n = ...; return n&1". For this reason, I would use n&1 instead of n%2 (or else make sure that n is an unsigned int). (gcc 4.3.4 is able to optimize "(n%2)!=0" to use the same fast code as "(n&1)!=0", but I prefer not to depend on the compiler being smart enough to implement non-obvious optimizations. I don't know whether other compilers include this optimization or not.)
Non-obvious? Are you kidding? Any compiler worth its salt does that, even old Borland I'm pretty sure. Also, you can't say `(-1)%2 is -1` because this is undefined according to the language. It very well could come out 1. That said, because compilers will optimize this to `n&1`, it then becomes trivial to see `n%2` == `n&1` == `n&1`
Joined: 9/6/2009
Posts: 24
Location: Renton, WA
GMan wrote:
cwitty wrote:
(gcc 4.3.4 is able to optimize "(n%2)!=0" to use the same fast code as "(n&1)!=0", but I prefer not to depend on the compiler being smart enough to implement non-obvious optimizations. I don't know whether other compilers include this optimization or not.)
Non-obvious? Are you kidding? Any compiler worth its salt does that, even old Borland I'm pretty sure.
I did some more testing, and it's certainly not a universal optimization. gcc 3.4 produces an instruction sequence for (n%2)!=0 that is one instruction longer (and presumably is slower) than (n&1)!=0.
Also, you can't say `(-1)%2 is -1` because this is undefined according to the language. It very well could come out 1. That said, because compilers will optimize this to `n&1`, it then becomes trivial to see `n%2` == `n&1` == `n&1`
This was implementation-defined in C89 (the sign of the result was up to the implementation), but C99 (the current official standard for the C programming language) does define it such that (-1)%2 is -1. I didn't understand the last sentence... was one of your `n&1` at the end supposed to be something else?
Joined: 4/7/2008
Posts: 117
cwitty wrote:
This was implementation-defined in C89 (the sign of the result was up to the implementation), but C99 (the current official standard for the C programming language) does define it such that (-1)%2 is -1. I didn't understand the last sentence... was one of your `n&1` at the end supposed to be something else?
Oops, I'm a C++ programmer; it's implementation-defined there as well. My last one was showing the transformation, poorly. Replace the first == with ->. Even so, I'm utterly surprised the optimization isn't universal; I swear I've seen an article on it before. You had -O3 and stuff?
Joined: 9/6/2009
Posts: 24
Location: Renton, WA
GMan wrote:
Oops, I'm a C++ programmer; it's implementation-defined there as well. My last one was showing the transformation, poorly. Replace the first == with ->. Even so, I'm utterly surprised the optimization isn't universal; I swear I've seen an article on it before. You had -O3 and stuff?
I had -O3, but not "stuff" :)
$ gcc-3.4 -O3 -S foo.c
Since n%2 means something different than n&1, being able to optimize n%2!=0 would probably require a special case in the optimizer exactly for that expression (at least I can't think of a general-purpose optimization that would help, although I'm not an expert on such things). Maybe you're thinking of the case where n is unsigned? I expect in that case, n%2 -> n&1 is essentially universal. (BTW, I'll bet that a lot of C++ compilers always treat n%2 as if it were defined the C way. That's how essentially all hardware will implement a%b; if a%2 gives a different answer than a%b when b happens to be 2, you'll have unhappy users, even if the behavior is allowed by the standard.)
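(To spell out why rewriting just the test is still legal for signed n: n%2 and n&1 can disagree on the value -- -1 versus 1 for negative odd n -- but they agree on whether the value is zero. A sketch, assuming a two's-complement int:)
#include <assert.h>

/* The values differ for negative odd n, but "is it nonzero?" does not,
   which is all the != 0 test cares about. */
void check(int n)
{
    assert(((n % 2) != 0) == ((n & 1) != 0));
}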
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
arflech wrote:
Would it be too hacker-ish to test for an odd integer with n&1, or to replace n%2 with (n>>1)<<1, or to pepper your functions with inline asm?
Those are good examples of the "null optimizations" I was talking about. Any compiler less than about 15 years old will be able to optimize such expressions. (Actually, there's a fourth type of "hacker optimization" which I forgot to mention: The negative optimization. In other words, something which might look like an optimization but that causes the code to actually be slower than the straightforward solution. Sometimes trying too hard to optimize simple operations can cause negative optimization because the only thing you are achieving is confusing the compiler.)
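A classic example of a negative optimization (hypothetical, and very much compiler-dependent) is the XOR swap trick: the "clever" version hands the optimizer a chain of dependent operations, while the boring temporary is an idiom every compiler recognizes and can turn into plain register moves:
/* "Clever": three dependent XORs -- and it silently zeroes the value if a == b. */
void swap_xor(int* a, int* b)
{
    *a ^= *b;
    *b ^= *a;
    *a ^= *b;
}

/* Straightforward: a temporary, which the compiler handles trivially. */
void swap_tmp(int* a, int* b)
{
    int tmp = *a;
    *a = *b;
    *b = tmp;
}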
Joined: 7/2/2007
Posts: 3960
Warp wrote:
(Actually, there's a fourth type of "hacker optimization" which I forgot to mention: The negative optimization. In other words, something which might look like an optimization but that causes the code to actually be slower than the straightforward solution. Sometimes trying too hard to optimize simple operations can cause negative optimization because the only thing you are achieving is confusing the compiler.)
This is exactly why I said to use a profiler to tell what code needs to be optimized, and then verify after the fact that you actually made things faster. Optimizing is a tricky business, and it's easy to fool yourself. Random question: which of these two do you guys prefer? Case 1:
int foo;
if (probablyTrueBoolean) {
    foo = 1;
}
else {
    foo = 2;
}
Case 2:
int foo = 1;
if (!probablyTrueBoolean) {
    foo = 2;
}
I'm not talking strictly in terms of execution speed; mostly in terms of readability.
Pyrel - an open-source rewrite of the Angband roguelike game in Python.
Emulator Coder
Joined: 3/9/2004
Posts: 4588
Location: In his lab studying psychology to find new ways to torture TASers and forumers
I find the second more readable. Unneeded extra logic just means more to keep in your head when analyzing code, causing one to lose focus. But I find this the most readable:
int foo = probablyTrueBoolean ? 1 : 2;
It allows me to immediately see that foo is being initialized depending on a bool and move onto the real meat of the code.
Warning: Opinions expressed by Nach or others in this post do not necessarily reflect the views, opinions, or position of Nach himself on the matter(s) being discussed therein.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Nach wrote:
int foo = probablyTrueBoolean ? 1 : 2;
That's too verbose! How about:
int foo = 1 + probablyTrueBoolean;
;)
Emulator Coder
Joined: 3/9/2004
Posts: 4588
Location: In his lab studying psychology to find new ways to torture TASers and forumers
Warp wrote:
Nach wrote:
int foo = probablyTrueBoolean ? 1 : 2;
That's too verbose! How about:
int foo = 1 + probablyTrueBoolean;
I would do that if the situation called for it, which I don't think it does in this case. But in general, the amount of verbosity used should be in proportion to how much the code does. A big block of code should be doing a big block of things. A small one-liner should be doing some simple math, a call, or an init routine. Otherwise the code misleads you into thinking unimportant sections are important and vice versa. The same can be said of ultra-descriptive variable names that are used as a temporary for one line.
Warning: Opinions expressed by Nach or others in this post do not necessarily reflect the views, opinions, or position of Nach himself on the matter(s) being discussed therein.
Editor, Active player (297)
Joined: 3/8/2004
Posts: 7469
Location: Arzareth
Warp wrote:
int foo = 1 + probablyTrueBoolean;
;)
Taking a hint from the probable values, for greater documentation value, I would rather write it as
int foo = 2 - !probablyTrueBoolean;
... if I were so inclined. Not to negate anything Nach said.
Joined: 7/2/2007
Posts: 3960
Heh. I'd forgotten about the ?: operator, since it's not available in Python, which is my day-to-day working language these days. A similar construct exists ("foo = 1 if probablyTrueBoolean else 2"), but I still find such statements usually pack too much meaning onto one line. I tend to opt for the second of my two posted styles, mostly because it bugs me to have a variable be uninitialized for any length of time. Paranoia, I guess, especially in situations like this where it literally cannot be read before it gets its value set.
Pyrel - an open-source rewrite of the Angband roguelike game in Python.
arflech
He/Him
Joined: 5/3/2008
Posts: 1120
Warp wrote:
arflech wrote:
Would it be too hacker-ish to test for an odd integer with n&1, or to replace n%2 with (n>>1)<<1, or to pepper your functions with inline asm?
Those are good examples of the "null optimizations" I was talking about. Any compiler less than about 15 years old will be able to optimize such expressions. (Actually, there's a fourth type of "hacker optimization" which I forgot to mention: The negative optimization. In other words, something which might look like an optimization but that causes the code to actually be slower than the straightforward solution. Sometimes trying too hard to optimize simple operations can cause negative optimization because the only thing you are achieving is confusing the compiler.)
I remember a similar issue with JavaScript: all numbers are 64-bit floating point, so bit-shifting an "integer" (which forces a conversion to a 32-bit integer and back) can actually be less efficient than just dividing by 2.
i imgur com/QiCaaH8 png
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
The ?: operator in C/C++ can be quite handy to express things more briefly, but should of course be used with care. The real fun begins when you start nesting ?: operators. It can get quite confusing quite soon. Even then, though, there are still situations where nesting ?: can be acceptable. For example, I consider this to be an acceptable usage:
bool Point::operator<(const Point& rhs) const
{
    return x != rhs.x ? x < rhs.x : y != rhs.y ? y < rhs.y : z < rhs.z;
}
Not that an if-elseif-else block wouldn't do the same, but it's more verbose.
Editor, Active player (297)
Joined: 3/8/2004
Posts: 7469
Location: Arzareth
Warp wrote:
For example, I consider this to be an acceptable usage:
bool Point::operator<(const Point& rhs) const
{
    return x != rhs.x ? x < rhs.x : y != rhs.y ? y < rhs.y : z < rhs.z;
}
I prefer to write code like that like this:
bool Point::operator< (const Point& rhs) const
{
    if(x != rhs.x) return x < rhs.x;
    if(y != rhs.y) return y < rhs.y;
    if(z != rhs.z) return z < rhs.z;
    return false;
}
The good thing about this approach is that it can be extended as the number of members grows without the code becoming unsightly. It can naturally be shortened as:
bool Point::operator< (const Point& rhs) const
{
    if(x != rhs.x) return x < rhs.x;
    if(y != rhs.y) return y < rhs.y;
    return z < rhs.z;
}
EDIT: Oh look at that, I missed the fact that your Points were 3D at first reading. Case in a point. Pun coincidental.
Joined: 4/13/2009
Posts: 431
Warp wrote:
The ?: operator in C/C++ can be quite handy to express things more briefly, but should of course be used with care. The real fun begins when you start nesting ?: operators. It can get quite confusing quite soon. Even then, though, there are still situations where nesting ?: can be acceptable. For example, I consider this to be an acceptable usage:
bool Point::operator<(const Point& rhs) const
{
    return x != rhs.x ? x < rhs.x : y != rhs.y ? y < rhs.y : z < rhs.z;
}
Not that an if-elseif-else block wouldn't do the same, but it's more verbose.
If you do that, then you need to format it so that it is more readable. For example:
bool Point::operator<(const Point& rhs) const
{
    return x != rhs.x ?
        x < rhs.x : y != rhs.y ?
        y < rhs.y : z < rhs.z;
}
It's funny how much this thread has derailed from C to C++, though... :rolleyes:
Former player
Joined: 6/3/2008
Posts: 136
Location: US
Mix it up eh? Post some C# and Java lines. I'm currently taking these 2 classes so give me a reason to be on here during class. :P
public class Test {
   public static void main(String[] args) {
     String s = "Java";
     StringBuffer buffer = new StringBuffer(s);
     change(buffer);
     System.out.println(buffer);
   }

   private static void change(StringBuffer buffer) {
     buffer.append(" and C#");
   }
}
<3
Trained by Cpadolf. Mission: To Perfect. Hero says: Yeah bro, I almost went super saiyan once My SM-RBO Current Project(s): 1)I don't TAS anymore! Pay me!
arflech
He/Him
Joined: 5/3/2008
Posts: 1120
I don't know whether you've found this out yet, but all you need to do to compile C# code from the command line is add the .NET Framework directories to your PATH environment variable; for C# 2008, put this in:
%windir%\Microsoft.NET\Framework\v3.5;%windir%\Microsoft.NET\Framework\v2.0.50727;
Now just type csc to use the C# compiler, vbc to use the Visual Basic .NET compiler, and vjc to use the J# compiler (now obsolete). If you want to target the .NET Framework 2.0, don't add the v3.5 directory, and if you want to target .NET 1.1, use the v1.1.4322 directory; you still need the v2.0.50727 directory for .NET 3.5 because 3.5 is built on 2.0 (it uses the same version of ngen, for example), and it will be the same when .NET 4.0 comes out (and then you will be able to type fsc to use the F# compiler, which is based on OCaml).
Anyway, yesterday I used an Ubuntu machine to compile that Chrono Trigger-related C code (after compiling TCC and then using TCC to compile TCC), and the program still didn't seem to do anything.
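For what it's worth, once the PATH is set up, the csc invocation itself is as simple as something like this (the file name is made up):
csc /out:Hello.exe Hello.cs
Hello.exe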
i imgur com/QiCaaH8 png
Joined: 4/13/2009
Posts: 431
NameSpoofer wrote:
Mix it up eh? Post some C# and Java lines. I'm currently taking these 2 classes so give me a reason to be on here during class. :P
public class Test {
   public static void main(String[] args) {
     String s = "Java";
     StringBuffer buffer = new StringBuffer(s);
     change(buffer);
     System.out.println(buffer);
   }

   private static void change(StringBuffer buffer) {
     buffer.append(" and C#");
   }
}
<3
Oh, I so hate Java... and C#.
Former player
Joined: 6/3/2008
Posts: 136
Location: US
EEssentia wrote:
Oh, I so hate Java... and C#.
You hate life?! Oh noeee!!
Trained by Cpadolf. Mission: To Perfect. Hero says: Yeah bro, I almost went super saiyan once My SM-RBO Current Project(s): 1)I don't TAS anymore! Pay me!
Joined: 4/13/2009
Posts: 431
o_O No, just nonsensical, inefficient languages...
Emulator Coder
Joined: 3/9/2004
Posts: 4588
Location: In his lab studying psychology to find new ways to torture TASers and forumers
NameSpoofer wrote:
Mix it up eh? Post some C# and Java lines. I'm currently taking these 2 classes so give me a reason to be on here during class. :P
public class Test {
   public static void main(String[] args) {
     String s = "Java";
     StringBuffer buffer = new StringBuffer(s);
     change(buffer);
     System.out.println(buffer);
   }

   private static void change(StringBuffer buffer) {
     buffer.append(" and C#");
   }
}
<3
Really now?
import java.util.*;

class test
{
  public static void main(String[] args)
  {
    boolean completed = false;
    try
    {
      Properties py = System.getProperties();
      Enumeration<?> e = py.propertyNames();
      while (e.hasMoreElements())
      {
        String key = (String)e.nextElement();
        String value = System.getProperty(key);
        System.out.println(key + " = " + value);
      }
      completed = true;
    }
    finally
    {
      System.out.println ("Completed=" + completed);
    }
  }
}
Save as test.java. To compile with GCJ:
gcj -o test test.java --main=test 
To run:
./test
To compile with JavaC:
javac test.java
To run:
java test
The former is native code, the latter bytecode that runs in a VM.
Warning: Opinions expressed by Nach or others in this post do not necessarily reflect the views, opinions, or position of Nach himself on the matter(s) being discussed therein.
Former player
Joined: 6/3/2008
Posts: 136
Location: US
EEssentia wrote:
o_O No, just nonsensical, inefficient languages...
Gah! I can't seem to understand how to play super mario bros! It is obviously an un-playable game that makes absolutely no sense at all.
Nach wrote:
Really now?
The actual purpose of that example was to understand and utilize the string buffer in a simple manner. From class, ofc. Get some. :)
Trained by Cpadolf. Mission: To Perfect. Hero says: Yeah bro, I almost went super saiyan once My SM-RBO Current Project(s): 1)I don't TAS anymore! Pay me!