Showing posts with the label functional programming

Are functional languages inherently slow?

Functional languages require infrastructure that inevitably adds overhead beyond what can theoretically be attained with hand-written assembler. In particular, first-class lexical closures only work well with garbage collection because they allow values to escape their scope. Beware of self-selection: C acts as a lowest common denominator in benchmark suites, limiting what can be accomplished. If a benchmark compares C with a functional language then it is almost certainly an extremely simple program, arguably so simple that it is of little practical relevance today. It is not practically feasible to solve more complicated problems in C for a mere benchmark. The most obvious example of this is parallelism. Today we all have multicores; even my phone is a multicore. Multicore parallelism is notoriously difficult in C but can be easy in functional languages (I like F#). Other examples include anything that benefits from persistent data structures, e.g. undo buffers are triv...
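The undo-buffer point can be made concrete with a small sketch. This is an illustrative example in Python (not F#, which the post favours) of a persistent singly linked list: each edit returns a new version that shares its tail with the old one, so keeping every version for undo costs almost nothing. All names here (`push`, `to_list`, `history`) are invented for the illustration.

```python
from typing import Optional, Tuple

# A persistent (immutable) singly linked list: each "update" returns a new
# list sharing its tail with the old one, so keeping every version around
# for an undo buffer costs O(1) extra space per edit.
Node = Optional[Tuple[str, "Node"]]

def push(items: Node, item: str) -> Node:
    """Return a new version of the list with `item` prepended."""
    return (item, items)

def to_list(items: Node) -> list:
    """Flatten a version into an ordinary Python list (newest first)."""
    out = []
    while items is not None:
        head, items = items
        out.append(head)
    return out

# Every edit appends the new version to the history; "undo" is just
# stepping back to an earlier version, with no copying at all.
history = [None]                          # version 0: empty document
history.append(push(history[-1], "hello"))
history.append(push(history[-1], "world"))

current = history[-1]                     # latest version
undone = history[-2]                      # one undo step back
```

In a language with mutable arrays, each undo snapshot would require copying the whole structure; with structural sharing the snapshots are free, which is the post's point about persistent data structures.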

Distinctive traits of functional programming languages

The landscape of functional programming languages is remarkably diverse, with most of the major families having quite distinctive traits and dialects that bring their own quirks. Here are some of the major categorizations:

Evaluation strategy: non-strict (Miranda, Haskell) vs strict evaluation.
Type system: static (Standard ML, OCaml, F#, Haskell, Scala, C# 3) vs dynamic (Scheme, Lisp, Clojure, Erlang) typing and untyped (Mathematica).
Kind of static typing: structural (OCaml) vs nominal (F#, Haskell, Scala, C# 3) static typing.
Type inference: Damas-Milner (Standard ML, OCaml, F#, Haskell) vs "local" inference (Scala, C# 3).
Destructuring: pattern matching (Standard ML, OCaml, F#, Haskell, Erlang, Mathematica) vs manual deconstruction (Scheme, Lisp, C#).
Extensibility of algebraic types: always closed (Standard ML, Haskell) vs optionally closed (OCaml).
Pattern matching: linear (Standard ML, OCaml, Haskell) vs unbounded (F#, Mathematica).
Run-time code generation: me...

Don Syme on "Functional approaches to parallelism and concurrency"

Don Syme, creator of the F# programming language, recently gave a superb lecture on parallel and concurrent programming using F# at QCon 2010. Video and slides are hosted by InfoQ here. Parallel programming continues to be a hot topic in the face of multicore computing but, as Don points out, the world is also moving steadily towards more concurrent programming. Work continues on our forthcoming "Multicore .NET" book, which studies parallel programming using C# and F# in detail...

Burton Smith vs Erik Meijer

We were recently led to a fascinating Channel 9 interview where the insidious Erik Meijer walks in on parallelism expert Burton Smith, culminating in a fight the likes of which have not been seen since Chris Curry vs Sir Clive Sinclair in the Baron of Beef here in Cambridge. A pragmatic Burton points out that lazy evaluation renders performance unpredictable and that, in turn, makes it impossible to tune the granularity of parallel programs to ensure that more effort is spent doing actual work than is wasted administering parallel tasks. The puritanical Erik counters that strictness is essentially another form of side effect because it affects issues such as non-termination. The conclusion is irreconcilable differences between what parallel programming requires and what purely functional programming can provide. Erik and Burton were joined by an elephant in the room, though: caches. They are the main difference between supercomputers and multicores and are a game changer, yet peopl...
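Burton's granularity argument can be sketched in code. The following is an illustrative Python example (the interview itself involves no code): work is split into chunks whose size the programmer chooses, which is only possible when the cost of each chunk is predictable, i.e. under strict evaluation. The function name and chunk size are invented for the illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def chunked_sum(xs, chunk_size, workers=4):
    """Sum `xs` in parallel, one task per chunk of `chunk_size` elements.

    Each task carries a fixed administrative overhead (scheduling, result
    collection), so each task must be given enough work to amortize it.
    That tuning is only possible when the cost of a chunk is predictable,
    which is Burton's objection to lazy evaluation.
    """
    chunks = [xs[i:i + chunk_size] for i in range(0, len(xs), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, chunks))

data = list(range(10_000))
# Coarse granularity: 4 tasks, overhead amortized over 2,500 additions each.
total = chunked_sum(data, chunk_size=2_500)
# With chunk_size=1 the program would spend more time administering 10,000
# tiny tasks than doing arithmetic: correct answer, terrible performance.
```

Under lazy evaluation a "chunk" may be an unevaluated thunk whose real cost is unknown until it is forced, so there is no reliable knob to turn, which is the irreconcilable difference the excerpt describes.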