Designers don't have to use the site, so they probably just try to make it look pretty.
But usability should trump aesthetics. If you're designing a blog, you should design it to be readable. That means a decent font size, no distracting graphics, and definitely nothing that moves/blinks/flashes. (I'm biased; I've always preferred reading sites that are "plain": just text, images only if they actually add value, and maybe some color.)
I'm not sure how someone can claim to be a web site designer if they neglect the basics of usability in favor of making the font stylishly tiny. It's like claiming to be a chef but overcooking everything.
I looked at the source code for that page, and it switches from a 12px font size to something even smaller about halfway through. Aside from the issue of specifying font sizes in pixels, why would you suddenly start adding <small> tags to already-small text?
Most blogs probably aren't designed by professionals. More likely it's Joe Blogger's friend, who thinks they have an eye for style and can whip up a WordPress theme.
The small tags are part of the original article that he scraped. Either way, it's odd: the font size gets progressively smaller with each paragraph.
This or another post proposed the hypothesis that low-contrast, small text etc. may look better during the design stage. Remember, the designer is probably working with dummy "lorem ipsum" text. Low contrast will highlight the designer's contributions (color scheme, placement of info) and [save him/her the pain of reading the dummy text :)].
That's not the best metric to begin with (I used to think so too, but someone here pointed out that gzipped source is already much better), and besides that, there are other ways of looking at it besides 'size', such as readability.
Lines of code and readability are both surrogate measurements for codebase complexity. People focus on lines of code both because it's easy to measure and because readability doesn't degrade linearly as a codebase grows: there are discontinuity spikes.
Once it grows past a certain size, you need the right kind of IDE, etc. to cope with the conceptual sprawl. People start duplicating functionality because they aren't aware things already exist, minor details that handle historically recurring bugs get overlooked in the noise, it takes much longer for new people to understand the codebase, etc. A 100k codebase is not just ten times harder to maintain than a 10k codebase.
That's the whole point. Even if you program in a 'sloppy' way, the gzip algorithm will see that and reduce accordingly, so you are comparing the languages, not the programmers' stylistic choices.
Besides, the article talks about implementing the same algorithm twice in different languages; if you went about that in a completely different way, then that would of course be problematic. But when comparing the expressiveness of two languages you'd assume the same major structures would be present.
Except that some languages encourage sloppy programming. Haven't you ever looked at some Java code where two (or more!) functions do exactly the same thing except that the types are different? GZip would compress that down to almost nothing, but you still have to deal with the complexity, maintenance burden, and API bloat of having multiple versions.
It also doesn't account for language misfeatures that introduce significant additional cognitive burden for the programmer but don't appear much in source code. For example, manual memory management in C++ results in a bunch of "delete x" calls in code, which would get GZipped down to very little, but impose a very high cognitive cost on the programmer. PHP's inconsistency in argument order doesn't show up at all in code length, but also presents a big cognitive tax that sends you back to the reference manual all the time.
Personally, I think that the best metric of language productivity is "The amount of information you have to keep in your head in order to write code as fast as you can type." Unfortunately, that's nearly impossible to measure.
Doesn't that suggest that such code 'bloat' is actually less important than people make it seem?
After all, there is not much to remembering that the same function exists for two different types; it's not like you have to remember something completely new, just a routine that is almost the same as another.
If you're right, then there isn't much value in abstracting over repetitive patterns. I'd say that is pretty decisively contradicted by the history of software. Why have functions at all then?
Because it saves space and because it makes maintenance easier, and it allows you to label a block of code with a meaningful name.
Caching also means that if you execute less code you will have better performance.
The actual instructions can be interpreted serially; in fact, when you're 'desk checking', that is exactly what you do: unwind the way the code is written into the serial stream that the processor executes.
Why is gzipped source size better? I mean, if with the same number of LOC the gzipped sizes differ, that means one program is more repetitive than the other, right? I suppose experiments were done to show that gzipped size was "better" in some sense?
I don't think you can objectively measure readability. It is tightly coupled with concrete syntax, which is highly subjective.
Because gzip (or any other text compression protocol) tends to reduce the size of the file without reducing the information content. So, for instance, if you used longer variable names but the same overall structure, or if you split single lines up into multiple lines (or vice versa), the gzipped byte counts would be relatively close in spite of huge differences in source size.
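To make that concrete, here's a quick sketch in Python (the two snippets themselves are invented): the same tiny function written tersely and verbosely, compared raw and gzipped.

```python
import gzip

# The same function: once with terse names, once with long descriptive
# names and extra intermediate lines.
terse = b"def f(a, b):\n    return a * b + a\n"
verbose = (b"def multiply_then_add(first_operand, second_operand):\n"
           b"    product = first_operand * second_operand\n"
           b"    return product + first_operand\n")

print(len(terse), len(verbose))        # raw sizes: very different
print(len(gzip.compress(terse)),
      len(gzip.compress(verbose)))     # gzipped sizes: much closer
```

The long names are highly redundant, so the compressor mostly cancels them out; the ratio between the gzipped sizes is noticeably smaller than the ratio between the raw sizes.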
Would the minimal number of lines of readable code be a useful metric? It's not an easy one to evaluate, as you have to actually read the program, but a smaller program should be more understandable than a larger one if they both have approximately the same information density.
Interesting article. Over time most of Lisp's features (generic functions, garbage collection) have been ported over to other languages, save two: syntactic abstraction and metaobject protocols. On syntactic abstraction there seems to be a genuine difference of opinion between Lisp (Scheme) and other languages. Lisp is on the side of allowing programmers to invent their own syntactic abstractions, hence making their programs shorter and at times more understandable. Other languages have largely been unwilling to make that leap of faith because syntactic abstractions don't scale up with the number of programmers. They also make it harder to reason about the correctness of programs and to build tools like profilers, debuggers, and code steppers. So this continues to remain a genuine point of difference, which means Lisp programs will continue to have the whizbang sleek look which other languages will find hard to emulate. On the other hand, Lisp programmers will continue to look with envy at the development tools of other "less featureful" languages.
syntactic abstractions dont scale up with the number of programmers
... as an objection to Lisp macros. It has become the invariable staple of these discussions and is accepted (by the people who accept it) without any question. Yet is there any evidence for it? I don't think I've ever even seen any evidence alleged for it.
The fact is, to justify this statement requires more than proving it's true. You'd have to prove that this failure of syntactic abstraction to "scale up with the number of programmers" is worse than the failure of abstraction in general to do so. Indeed, in my experience, it's programming that fails to scale up with the number of programmers.
I don't know what you count as evidence, but here are two eminent authorities speaking on related topics:
Turing Award winner Barbara Liskov:
http://www.infoq.com/presentations/liskov-power-of-abstracti... (Syntactic abstraction related talk starts at minute 17).
Here's what Dijkstra has to say about something tangentially related (he is talking about the importance of read-time comprehensibility of code):
"My second remark is that our intellectual powers are rather geared to master static relations and that our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible."
This is from the influential "Go To Statement Considered Harmful" paper.
"Lisp has jokingly been called "the most intelligent way to misuse a computer". I think that description is a great compliment because it transmits the full flavor of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts."
Not to take anything away from Dijkstra and Liskov, but of course I don't count their opinions as evidence; eminent authority opinion is what one resorts to in the absence of evidence.
I don't have time to watch the video, but it seems rather obvious that the Dijkstra quote has little to do with macros. What he's saying sounds to me like "lexical scoping is easier to understand than dynamic scoping". If anything, that makes macros more valuable rather than less, since macros are intrinsically lexical and macroexpansions don't typically change over time.
Dijkstra's quote sounds more like an argument for defaulting to immutability than anything about syntactic abstractions. (I happen to agree with him, there.)
"to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible". Most programmers read programs as plain source most of the time, i.e., not as macroexpanded source. If editors showed macroexpanded source code by default, things would be different. Macroexpansion also in practice affects the compiler's ability to highlight which line caused an error, and the ability to step through code intuitively (you are stepping through macroexpanded code, and it is sometimes tiresome to relate that back to the code you actually wrote).
Let's assume for a moment that the critique is correct: macros don't scale. OK. The C++ templating system can be used to accomplish many of the same things as a good macro system (see http://matt.might.net/articles/lambda-style-anonymous-functi... for one example). Does the C++ templating system "scale"?
Or, to put it another way: lots of C++ shops ban the use of advanced templating metaprogramming. Has C++ suffered egregiously as a result?
C++ templates are both less powerful and more complex and fragile than lisp macros. But no one ever says "oh, the C++ templating system does not scale" or "oh, C++ will never be successful until it eliminates/replaces the templating system"...do they?
Prolog also has syntactic abstractions, though it doesn't have a culture that uses them to the same extent because (overgeneralizing a bit) every expression is quoted by default, and it also has extremely powerful pattern-matching. That (or good support for lazy evaluation, in other languages) covers many of the same use cases as Lisp-style macros.
Also, K has a "whizbang sleek look" that makes Lisp look ridiculously verbose. (http://www.nsl.com/papers/kisntlisp.htm) Ditto J and APL, though K seems to take it furthest. Part of their density comes from being based on ideograms, not lexical tokens; the density is like comparing Chinese to English.
Unfortunately, K is closed source and limited to 32-bit platforms for non-commercial evaluation. It's also quite expensive, since its primary product is a mind-bogglingly fast database designed for real-time stock data analysis. I've been fascinated with K for a while, though, albeit mostly at a distance. (J's evaluation isn't limited to noncommercial use, though.)
K, J, Lisp, Prolog . . . I'm beginning to be worried about my own fascination with uncommon languages.
Is there a support group available? If the three books you read when the kids are asleep are The Reasoned Schemer, The Art of Prolog, and On Lisp, are you now officially beyond help on the language geek scale? I used to be a somewhat productive programmer. These days I think more about making sure I've picked the right language for the problem . . .
I spent a while chasing the weirdest languages I could find (relative to everything else I knew at the time), but feel like I've gotten to the point of diminishing returns. I've been studying types and semantics instead.
I do most of my actual programming in Lua + C and an in-house language. I really like Erlang (but haven't worked on the right kind of projects lately), and I also prototype stuff in Prolog.
I would love to have a more modern dialect of Prolog, designed primarily for embedded use as a C library (much like Lua, and ideally with a similar C FFI). It's a cool language, but it's also really strongly skewed towards certain problem domains. Lua has taught me that when a language can delegate its weak points to a symbiotic language, it can stay really small and focused on its strengths.
There's a lot of stuff about it at http://nsl.com, and you can get an evaluation version of Q (the newest K dialect) and associated documentation from http://www.kx.com.
Well reasoned for the most part, but Lisp has occupied the high ground for development tools for a long time (REPL, SLIME, trace/untrace, step, describe/inspect, room, macroexpand). Other languages have been playing catch up in this regard as well. Whether Lisp still occupies the high ground today is unclear, but it is erroneous to say that Lisp programmers envy the development tools of other languages.
Good point about SLIME (it's my development environment of choice). Lisp tools are built for dynamic development from the ground up, so in that respect they are very good. However, finding a Lisp source stepper that works has been an exercise in frustration for me. The best one I've found so far is the PLT Scheme stepper, which mostly works.
With regard to development tools, maybe he was referring to libraries and community support.
Perl, a slow language criticized for its syntax and bearing the stigma of being a mere scripting language, is often lauded as having huge community support and tons of helpful (even if often crappy) libraries via CPAN. C++ has similar support (no CPAN that I am aware of, but I have always quickly found solutions and libraries with the search engine).
Granted, CL can connect via U/CFFI to any external library, but it is harder to find the wrappers for CL than C/C++ header files.
I think there's one other important Lisp feature that hasn't been widely adopted: compile-time metaprogramming. A lot of Lisp macro use boils down to "allow the developer to generate lots of new code based on tiny fragments of code he writes." The Python community has adopted this to good effect, but the Java/C++ community really hasn't. The funny thing is, the need for good code generation doesn't go away simply because the language provides no facilities for it. That's why C++'s broken template system is so often abused to provide compile-time metaprogramming. And that's why Java added annotation factory support.

The preferred use for annotation factories is to register a factory class that the compiler will call when compiling code that uses your annotation. Your factory can then access a special compile-time API that allows it to create new Java source files based on the AST that the compiler feeds it. More sketchily, your factory can modify the AST in place. This is an insanely baroque way of getting compile-time metaprogramming, but it solves a real problem.
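For comparison, here's roughly what that pattern looks like where the language makes it cheap; a minimal Python sketch (all names invented) that generates a class from a field list at import time, the kind of boilerplate a macro or annotation factory would otherwise write for you:

```python
# Hypothetical sketch: generate a record class from a field list.
def make_record(name, fields):
    # Build the __init__ source from the field list, then compile it.
    args = ", ".join(fields)
    body = "".join("    self.%s = %s\n" % (f, f) for f in fields)
    src = "def __init__(self, %s):\n%s" % (args, body)
    namespace = {}
    exec(src, namespace)  # Python's "compile time" is import time

    def __repr__(self):
        vals = ", ".join("%s=%r" % (f, getattr(self, f)) for f in fields)
        return "%s(%s)" % (name, vals)

    return type(name, (object,), {"__init__": namespace["__init__"],
                                  "__repr__": __repr__})

Point = make_record("Point", ["x", "y"])
print(Point(1, 2))  # Point(x=1, y=2)
```

The generated __init__ is ordinary compiled code once it exists; a Lisp macro or a Java annotation factory plays the same role, just at actual compile time.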
I dunno what life inside other big companies is like, but Google uses compile-time metaprogramming all the time. There are tons of (perhaps too many) "little languages" used inside Google. Some have even been open-sourced, e.g. protobufs, google-ctemplates, Closure Compiler. And if their open-source output is any guide, Facebook uses tons of little languages as well.
It's just that they're done the UNIX way, with a miniature compiler or XML substrate, instead of the Lisp way, where you write everything in S-exprs. This tends to make the little languages easier to grok for users but harder to implement, which tends to be a good tradeoff, as there will be far more users than implementers of a programming language.
Sure, you can totally do compile-time metaprogramming without language support, but the barrier to entry is much higher. That means that, at the margin, you're less likely to try it, especially for small projects. And most projects start small.
I totally agree that good development shops do lots of compile-time metaprogramming. The issue is whether they do enough when dealing with problems (that start out as) too small to justify writing a full blown parser/compiler, and if they don't, how much of the shortage is attributable to a lack of language support in C++/Java.
I have been trying to understand the problem with syntactic abstraction in other languages, or with creating your own language to fit the problem. Is it a real issue? Or is it just the same problem people have with any unfamiliar codebase where the developers made unclear or poor abstractions?
To clarify what I mean, I have presented lisp ideas to fellow coworkers and met with resistance along two lines:
(1) They want to understand what is really going on, and the abstraction can obscure that. This sounds to me like a general abstraction problem more than an issue with syntactic abstraction... and really, it seems like a problem with understanding how someone chose to abstract things. They want to look underneath the hood for answers.
(2) They suggest that new developers will have a hard time understanding and adding to the system. But new developers on any project have to learn the local language anyway. It may look like C++, Python, or whatever, but the framework the other developers built always has its own learning curve.
Surely syntactic abstraction is just another abstraction tool that can be used to help make those frameworks easier to dive into. There is always poor code and poor documentation; that's not an argument against declaring functions or other nice abstractions. Why is it any scarier to use this tool than to use someone's library?
There is a small difference: syntax affects the read-time comprehensibility of code in ways other abstractions don't. For example, in C++ an overridden "+" operator could be doing anything in the background, and there would be no way for you to know from a casual read of the code. Lisp has macroexpand-1 for this, but one needs to be diligent enough to use it on new pieces of code.
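A quick Python analogue of the overridden-operator point (the class is made up): at the call site this reads like ordinary addition, but it could be doing anything.

```python
class Sneaky:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        # Silently multiplies instead of adding; nothing at the
        # call site hints at this.
        return Sneaky(self.value * other.value)

result = Sneaky(3) + Sneaky(4)
print(result.value)  # 12, not 7
```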
Since the + operator in C++ is actually a class method, it sounds like all abstraction suffers the same potential problem of "no way for you to know by a casual read of the code".
That is a good point about the macroexpand functions; I would hope that if syntactic abstraction were added to C++, there would be a similar way to view the resulting code (sounds like a great future gdb/MSVC debugger feature).
I have found that while I mostly understand C++ as a language and respect it for what it tries to accomplish, the development environments are absolutely atrocious in many ways, and I spend more time fighting them than writing code. This has made me conclude that even though I could in theory program in C++, it is a waste of human resources to do so with the current generation of development tools.
OCaml also allows syntactic abstractions nowadays. Of course it's more complicated to use than in Lisp, because the syntax of OCaml is more complicated in the first place.
To some degree, I feel that syntactic abstraction is a premature optimization. Many (most?) benefits of macros can be replaced by a combination of lazy evaluation and functional programming. Most of the performance benefits can be obtained by a sufficiently smart compiler which does partial evaluation. I use macros in lisp a lot, and I often wish I had them in Python/Java/C++.
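As a sketch of what I mean, here's the classic "unless" macro example done in Python with explicit thunks (zero-argument lambdas) standing in for lazy evaluation; the names are invented:

```python
def unless(condition, then_thunk, else_thunk=lambda: None):
    # Only one branch is ever evaluated, just as with the macro.
    return else_thunk() if condition else then_thunk()

x = 0
# 1/x is wrapped in a thunk, so it is never evaluated here:
print(unless(x == 0, lambda: 1 / x, lambda: "undefined"))  # undefined
```

In a lazy language like Haskell the thunks are implicit, which is why an ordinary function suffices where Lisp reaches for a macro.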
In Haskell, I can only think of one very narrow use for them which has never actually come up for me in practical use (automatically generating instances for combined monads).
Out of curiosity, can anyone provide a more common use of macros which can't easily be done in Haskell with HOF/Lazy evaluation?
The only things I can think of are things that wouldn't enter the head of a Haskell programmer to begin with, such as syntactic shortcuts for complicated side effects.
So you're right, lazy evaluation can do just about anything in a purely functional environment. But now count the number of programming languages that provide a purely functional environment...
...such as syntactic shortcuts for complicated side effects.
I can think of a couple of examples where I've done such things in lisp (mostly elisp, sometimes Clojure), but I don't see a good reason why I couldn't do them in Haskell. In fact, a function f: a -> M d (with M = IO, Writer or State if you want side effects) almost serves the same purpose: it turns a value into a set of actions to perform.
I'm really curious, could you give an example of such a task/macro?
(Note: I'm not trying to engage in a language war, I'm honestly curious. If I'm treating Haskell as Blub, I'd like to know about it.)
"Greenspun's Tenth Rule of Programming: any sufficiently complicated C or Fortran program contains an ad hoc informally-specified bug-ridden slow implementation of half of Common Lisp."
- Philip Greenspun
"Including Common Lisp."
- Robert Morris
"We were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp."
- Guy Steele
I have to agree that even though the STL is extremely powerful, it makes programs extremely verbose.
But, in my opinion, the real reason C++ code seems so verbose is the lack of closures. In C++, when you need one, you have two choices: either manually code everything that would be in the closure or create a totally new function.
In fact, in the new C++0x, it could be interesting to have an STL-like library but with closures, so one can do:

  some_sequence.sort_by(_1 > _2);

or:

  foreach(sequence, cout << _1 << endl);

That's, again in my opinion, way better than:

  for (int i = 0; i < sequence.size(); i++) {
      cout << sequence[i] << endl;
  }

and again still better than:

  for (vector<int>::size_type i = 0; i < sequence.size(); i++) { etc. }