Why Object Oriented Programming Is Bad

I’m not a fan of Object Oriented Programming (O.O., or Oop), and I make no bones about it. I’ve learned it just well enough to do it and to know I don’t want to do it. I’ve managed major projects written in it. So it isn’t like I’m rabid about it out of inexperience, or like I’ve never been proselytized to about it.

It is just that I find it adds difficulty for no good reason (no real gain) and obscures the fundamental actions being done by any bit of code. Ideology is NOT a good reason to make your code more obscure, more fractured, written with variable names a sentence long, or to take a simple procedure and squash it into a dozen different conceptual boxes just because you like boxes.

Over the years I’ve felt a bit more alienated as ever more of the programming work moved to OO; the rise of Java, and web site programming being largely done that way, especially didn’t help. At times I’d wondered if I was just too set in my procedural and functional ways to “make the leap”. Yet I’d written some test programs in Oop. I had the concepts. I just didn’t see where applying them made sense.

Then this video comes along, done by a guy with far more Oop chops than I have, laying out in fine-grained detail just where and why OO fails to deliver on the promise. About half of it is the stuff I’d intuited: places where I just didn’t see it delivering, and where he says, in fact, it doesn’t deliver. The other half is a more technical & theoretical examination of where it falls flat, beyond anywhere I’d gotten to. Yet having it said, I see it.

The forced data types. The need for “refactoring”. The way Oop causes problems in inter-module communications. Things I’d seen “at a distance” but was unsure why; he makes them clear.

For anyone who is a programmer, this video will be important. For those of you who have no idea what programming is about, it will likely be a bewildering string of sentences, full of programmer jargon about something you didn’t know exists, that don’t mean much; but it might give some sort of feel for what programmers argue about at the Java Hut ;-)

A bit long at 44 minutes, but it takes that long to cover the turf. He moves fast and there isn’t a lot of “fluff”; it is information dense and doesn’t waste time:

That we’re down to (by his claim) only 5% of programmers willing to talk dirt about Oop shows how pervasive it has become in programmer culture. It now dominates the Academic world. In fact, I’d assert the profession of programming did not embrace Oop so much as the Academics did, and after a decade or two, that was what most graduating programmers believed, often without experience on which to base that belief.


23 Responses to Why Object Oriented Programming Is Bad

  1. Power Grab says:

    I will watch the video. I haven’t yet. But I want to say that I, too, prefer to work with procedural programming tools rather than OOP tools.

    When I stopped even looking into the newest generation of my tools, it was because the main goal seemed to be to force programmers to create Windows-compliant user interfaces.

    When I tried to convert to OOP, it seemed that I spent most of my time putting a pretty face on my programs, as well as trying to find a niche to stick in my own code.

    I could see my being forced to re-write programs every couple years or so, just to remain up-to-date with my tools. That seemed like a huge waste of time. My main need is for something that can deal with tables with as many as a million or more records without choking, and do the job FAST.

    I also am concerned that today’s tools are just toys that force programmers to “string black boxes together”. It bothers me that any given Windows patch might break one of my programs, and I would not have a clue where it broke. :-P

    With my procedural tools, I can proudly say that I have programs that I wrote in 1999 that I still am able to use today. :-D

    Whenever my department has to wait on programmers in the main IT department to do custom development using whatever the Latest-And-Greatest-Tools are, it takes many months (or years) before they get it done. And when they think it’s done, it doesn’t usually work right and takes many weeks/months for them to figure out how to make it work right. Of course, part of the problem might be that today’s newest systems are designed to restrict all developers/users to their own prescribed playground. I’ve had to work with an analyst across the country whose access is hugely different from mine.

    Example:

    Analyst: “Submit your job on FOOBAR.”

    Me: “What is that, and how do I get to it?”

    They finally figure out I don’t have access to whatever FOOBAR is, and tell me to request it.

    I ask how to request it.

    Eventually we go through several iterations of attempting to get access, approval, etc.

    I ask my old Go-To Guru for advice and get some useful details.

    When I finally try to submit the job, it refuses to run. Oh, whatever access they granted me doesn’t include actually running jobs.

    Rinse and repeat, ad infinitum.

    It didn’t used to be this hard.

    I keep wondering if part of the problem is that today’s programmers have no earthly idea what the actual users need to do their job efficiently and accurately. Every web-based thing they make us use is always more obtuse and slower than whatever local client version it replaces. They always lack some functionality that we used to depend on in the local client version, too.

    Here is an example:

    With our most recent “upgrade” to the organization’s mission-critical accounting software, our department’s head bean counter no longer has the ability to call up payroll numbers. Yet he is still required to cobble together a budget for each year. Apparently, he just pulls numbers out of the air (or somewhere else!) for the payroll part of our budget.

    For that matter, even our HR staff have to struggle to maintain a current list of all our department’s employees, titles, pay rates, etc.

    I’m sure this has to do with trying to prevent employees’ identities from being stolen. But I ran into an article this year that spells out how all the new security techniques are actually making it harder for the security people to detect and deal with break-ins.

  2. E.M.Smith says:

    @PowerGrab:

    On one occasion I attended a Cray Users Group meeting. I found out our shop ran a Cray site with 40 people (and that included the secretary and tape hangers…) while the next smallest site in staff size ran with 200 staff.

    Part of “how” was that ALL my staff had Root Access. If anyone needed to get something done, they could just do it. ( I also told them they were empowered to “get it done” as long as they didn’t screw up too badly… screw ups were acceptable just not catastrophic screw ups.)

    It is fairly easy to monitor the root use of 4 systems programmers and a dozen computer operators, along with 2 managers. You don’t need dozens of “control systems” and lockouts and approvals processes when everyone in the shop knows what everyone else is doing.

    During my time we had NO breaches of our secure side security. The only incursions were to the Honey Pot.

    I doubt I could achieve that record today. Microsoft and Web Browsers make it impossible to be really secure. Legal Compliance mandates doing so many stupid and complicated things you can’t do the simple one of: Pick really good people you can trust, and trust them. Build the culture right.

    Oh Well, not my problem anymore. Now it is just my own “shop of one” I need to worry about. Oddly, everyone in it has root access too ;-)

    I still sporadically write code, but it is basically down to Shell (various nearly identical variations on sh / ash / bash /…), FORTRAN, C, and the odd bit of whatever language needs a bug fix if something breaks – but I’ve not needed to do that for a few years (Python for GIStemp was the last one of those, a few years back). I’m torn between C and Fortran as my favorite. (FORTRAN being the older f77 style and Fortran being the newer revs of the language.) They both just let me get the job done fairly fast and in a straightforward way. Not a lot of fooling around with the language.

    Since much of what I’ve done lately has either been “Systems stuff” or bulk processing of blocks of temperature data; those two languages are just about ideally suited (all wrapped in shell stuff for JCL).

    Frankly, it had been a good 30 years between my use of FORTRAN and needing to dust it off for GIStemp. I was pleasantly surprised at how much of it was still in my brain ;-) and at how easy it was to “just do it”. Folks “diss” the FORMAT statement, but I love it. Got a flat file? Just whip out a FORMAT statement to match, suck in the data, and go! Not much fooling around needed. I can do fixed format read / write in C, but it just feels so much like a kludge. Invoking functions and parsing strings long hand? Really? Just give me the damn float from columns 4-12 damn it! ;-)
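
    To make that concrete, here is a minimal sketch of the “long hand” fixed-column parse in C (assuming, purely for illustration, a flat file with the float sitting in columns 4-12):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* The long-hand C way: copy out the fixed columns, then convert. */
        int main(void)
        {
            char line[256];

            while (fgets(line, sizeof line, stdin) != NULL) {
                char field[10];                 /* 9 columns plus the NUL */

                if (strlen(line) < 12)          /* line too short, skip it */
                    continue;
                memcpy(field, line + 3, 9);     /* columns 4-12, 1-based */
                field[9] = '\0';
                printf("%f\n", strtod(field, NULL));
            }
            return 0;
        }

    In Fortran the equivalent is roughly a one-line formatted read, e.g. READ (*, '(3X, F9.0)') VALUE, which is the convenience being praised.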

    Oh well. Such is the way of the Fossil Programmer ;-) (And I don’t mean the version control sys.)

    Frankly, part of why I’ve come to dislike CentOS (and RedHat) is the creeping enforcement of compartmentalized powers. If I only have 3 hours to do something on a system, I don’t want to spend 2 of them granting myself permissions in Kerberos!

    There comes a point where the S curve (Logistic Curve) of complexity is in its exponential growth phase, and if you have linear time available while complexity grows exponentially, your systems ossify and grind to a halt. I’m pretty sure much of programming and most large computer shops are now firmly in that condition. KISS died a couple of decades ago…

    Oh Well.

    Maybe as Moore’s Law runs out we’ll have a renaissance of tight efficient code. When you look at what the early Mac did on a 68000 CPU it is not that different from what is done today on quad core Intel Macs. How did they do it? Incredibly hand optimized assembly code in ROM. And not that much ROM either. Sure, it took a few thousand programmer hours to make that ROM, but the payoff was millions of user hours for a decade+ ( I still have one ;-) so clock not stopped yet…)

    When you look at what was done in K of memory on Mhz machines with 16 bit (or 8 bit!) words and then someone says G of memory on Ghz machines with 64 bit words is not enough… well, it just screams at you that the method of coding is very wrong. You don’t lose 4 orders of magnitude of hardware performance without doing something very wrong… IMHO, Oop is one of the things that’s wrong.

  3. Ian W says:

    “When you look at what was done in K of memory on Mhz machines with 16 bit (or 8 bit!) words and then someone says G of memory on Ghz machines with 64 bit words is not enough… well, it just screams at you that the method of coding is very wrong. You don’t lose 4 orders of magnitude of hardware performance without doing something very wrong… IMHO, Oop is one of the things that’s wrong.”
    So true.
    The overhead of servicing the system is preventing the actual processing required to carry out what the system was purchased for. In the attempt to hide the machine from the programmer, layers of complexity have been added.
    As far as OO is concerned the problem is that the idea was hijacked by software ‘engineers’ as a supposedly clever way of reusing software; that was not the intent of the Booch and Rumbaugh era of object orientation. The intent was that the ‘object’ in code represented an object (not a process) in the real world and therefore as real world objects do not change hugely, the code would be stable. From the abstraction ‘Vehicle’ objects like cars, ships and aircraft could be created which inherited some aspects of ‘Vehicle’. But I found the battle was lost when ‘journey’ became an object rather than an attribute of the object Vehicle, then the processes to implement the ‘journey’ became ‘objects’ and before you knew it an entire esoteric programming approach was born.
    But then I did most of my programming at effectively machine level in BAL and JOVIAL and then various flavors of RSTS, RSX11M and TAL (where I had to show programmers that locking things into what was a virtual registry actually reduced performance).
    I would think that programmers working with an operating system that was totally minimal and reprogramming applications without all the fol-de-rol of multiple layers to ‘make things easy’ using languages that can directly access the hardware, could probably re-start Moore’s law for another decade.

  4. Steve Crook says:

    Can’t say I agree with some of his core arguments. My history first: started programming during the 70’s in COBOL on mainframes, then C, C++, Java, C# and too many scripting languages and OSes to recall.

    I can’t agree with a lot of what he’s saying. Most of his criticisms of OO are just criticisms of bad OO design and programming, not fundamental flaws, any more than C pointer maths is a damning criticism of C or of procedural programming.

    My view is that there are good programmers and bad programmers, and that the good ones will write good, comprehensible, extensible code in whatever language you give them if they have the time, space and appropriate tools.

    The IDE I currently use makes it easy to move code around, extract and merge methods, rename classes, methods and variables, add comments & doc comments. I can find who inherits from whom, who overrides virtual methods, and where methods are called. Generally, I don’t think in source files any more, but by class, method and variable, which I can easily find through a browser. It does lint checks as I type, colour codes, even spellchecks comments. Compared to what I was using during the 70’s through to the early 90’s, it’s at least an order of magnitude easier to write good code now than it was then.

    I remember just how awful C was, how easy it was for unexpected consequences to arise, the barmy naming conventions to try and keep track of type, state and ownership, just how bad and difficult to maintain it was. Even the relatively simple task of renaming a method could take half a day or more in a large system. It’s easy to see why OO became a thing…

    Availability and ‘novel’ language design (looking at you, Smalltalk) had slowed uptake. C++ was what gave it a kick in the pants because it was familiar. Java was the antidote to C++ operator overloads, multiple inheritance, and all the nasty crap that was kept to pacify C programmers. The bytecode and simplified syntax made it possible for IDEs to support refactoring and syntax highlighting.

    In Java I make use of static helper class methods to do the stuff that’s more appropriately done procedurally. My class hierarchies tend to be broad rather than deep, reflecting the problem I’m trying to solve. I don’t remember the last time any base class had anything more than a grandchild and I’ve never had issues with single inheritance. I don’t expose any more class state than I need to at the time I create the class and generally avoid ‘setter’ methods; class state can only be changed by methods that are owned by the class. So it’s clear where the chain of ownership is.

  5. philjourdan says:

    I was a programmer/SA last century (literally). But it was around the time that OOP became the new buzz word. OOP for the system you are creating is great. You write one object and then reuse it throughout the system. But I never could get it to become trans-system, as each system had its own requirements and no 2 systems are identical…..

    So no, OOP is not useful. I tried it and found I had to heavily modify each object for each system. Which means I took only the idea of the use of the object from one system to the other.

  6. cdquarles says:

    I will also watch it. I got into computers near the beginning of the micro-computer age. There was no computer science back when I took my first classes. There was a big machine in the basement of the Mathematics Department building, ministered to by a minor clergy. I keypunched decks (FORTRAN IV/66) or used a dumb terminal (BASIC). Later, I got to work with an Apple II+ (pathology basic science lab, which happened to be a cholesterol one) and a VAX (obstetrics). So, I was an early on person in medical computing applications, which included a 6 week tour of external sites doing “MIS” for medicine. Give me BASIC/FORTRAN or any other procedural Algol-like language, please. I taught myself 6502 assembly. I eventually learned a bit of Intel x86 assembly. I can deal with C, but I don’t like it that much. There’s nothing wrong with the language, itself, so I guess it is a matter of taste and that I wasn’t doing operating systems or peripheral drivers for them.

    Others who were into it more than I was can say more; but that OOP was from academia, by academia, and for academia’s goals sounds right, to me.

  7. corev says:

    My own experience WAYYY predates most high level languages, doing real-time programming in support of the Apollo and even earlier space programs. We used some dated and excessed (no longer needed) Univac 1206 and 1218 Navy puters.
    (Word Length: 30 bits
    Speed: 9.6 microseconds add time
    Primary Memory: 32,768 words core memory (3.6 microseconds access time)
    Secondary Memory: Magnetic drum and magnetic tapes
    Instruction Set: 62 30-bit, single address instructions
    Architecture: Parallel, binary, fixed point arithmetic, 7 index registers, 1 accumulator register, 1 free register…)

    http://ed-thelen.org/comp-hist/univac-ntds.html

    We needed to rewrite the early assembler, compiler, and Fortran math functions to keep our short-term data collection and processing cycles within the 25, 100 and longer millisecond routine execution time frames. Yes, milli- and not micro-second time frames.

    With these boat anchors we tracked man’s flight off the planet, tracked and calculated mission capsule orbit insertion and re-entry even through the re-entry blackout period due to high heat signal blocking.

    Core memory, paper and metallic tape storage and operator control through the register bit setting keys on the computer panel.

    As E.M. has noted about SBCs and super computers, the calculators of 2 decades ago were more powerful than what we ran as small-room-sized computer centers. Times have changed.

  8. Spetzer86 says:

    I enjoyed the OO video, as I’ve never enjoyed OO. I was struck by something peripheral to the discussion, but relevant to the argument: OO is just like the management system of holacracy. If you’ve not heard of holacracy, consider yourself fortunate. But the overall OO discussion in the video covers so many of the inherent problems with holacracy that it makes clear why the developer of holacracy had to be a programmer.

  9. John F. Hultquist says:

    Started with a slide rule, then a Friden mechanical calculator, then FORTRAN II on an IBM 1620, and after another few years I took a fork in the road that led away from such things.
    I did continue to use computers, and likely cussed a few times at folks like E.M. and others here. First home unit was Commodore’s VIC-20.

    The current Intel situation sounds a lot like Xerox, Kodak, and now Sears.

  10. Larry Geiger says:

    Steve Crook, what is the actual performance of your programs? You say that you like all of the overhead, but how well does it work? Programming is not for the programmer but for the user. Just curious.

  11. llanfar says:

    I’ve been a software dev since the early 80’s. Mainly Java since ‘96, moving over to JavaScript/TypeScript in June. As with @Steve Crook, I took issue with his conclusions. Many of the solutions he proposed are what I consider best practices. When I mentor people, I always teach as a purist, but encourage people to implement practically. Best practices are a biggie for me, especially when dealing with off-shore developers. I have been getting more and more frustrated over the years by Scrum (the current wunderkind for “fast and unwieldy”); it has caused huge amounts of tech debt to pile up as devs never move past step 1 of “Make it work. Make it fast. Make it right.”

    Nice story on the ‘96 Java: one of the other devs on our team was a knowledge engineer who was writing Lisp when I was born in ‘62. He had written a rules engine in Java (NARL – Nodes and Arcs Representation Language). It was too slow on the AIX implementation, so I ported it to C++. He joked that the knowledge engineers and OO engineers took 2 separate paths… he likened his group to people who wore jeans, while the OO crowd wore coat & tie.

  12. Power Grab says:

    @ cdquarles re: “There was no computer science back when I took my first classes. There was a big machine in the basement of the Mathematics Department building, ministered to by a minor clergy. I keypunched decks (FORTRAN IV/66) or used a dumb terminal (BASIC). ”

    Sounds like my first experiences with computers. My first exposure to computers was a course in FORTRAN. I loved it!

    Then I had a course in Assembler. I hated it. But always admired people who could make it sing and dance!

    I also had COBOL, but I preferred FORTRAN. I guess the thing you start with is always your most comfortable thing.

    I remember when there used to be many word processing programs in use on personal computers. When it came time for me to buy my own, I saw an article (or maybe it was an ad?) that compared the front-runners. The chart they included must have been honest: it didn’t pinpoint the advertiser as the best product, but WordPerfect.

    When I learned that WordPerfect was originally written in Assembler, and when it seemed lightyears ahead in performance and flexibility compared to the others, I was sold. I always thought it was a shame that Microsoft and Windows products overtook them eventually in sales. Microsoft Word seemed really kludgy to me. The user interface made you think you had done an edit right, but as you worked on down the page, you could see that that complicated edit wasn’t really nailed down.

    At least with WordPerfect, you could deal with codes and know where styles started and ended.

  13. John F. Hultquist says:

    We went with Volkswriter, a 1980s-era word processor for the IBM PC written by Camilo Wilson and distributed by Lifetree Software. It worked, but did not evolve and last.

  14. jim2 says:

    My shop chose C#. It wasn’t my personal choice. But I can tell you, it isn’t exclusively an “OOP” tool. You can design a program OOP-wise or just put all the code that does everything in one class. It’s really up to you. I do OOP and I’ve come to like it. I started out with Applesoft and Visual Basic, not very OOP at all. I understand why you might want to write a driver in assembler or C. But for maintainable code, object oriented does a decent job. I don’t like or use inheritance much, although some have used it in some of our bodies of code. I like small classes that do one basic thing well. There are more advanced design patterns these daze than inheritance and polymorphism.

    This book contains a sampling of design patterns; a lot of what we do uses these, and SOLID got me started with OOP.

    If you program to an interface, you can do things like this with an object.

    Let’s say we have an object that generates a bill with a sum, and we want to either save it to a database, write it to a text file, or print it.

    We define an interface IWriteIt with a single method:

    interface IWriteIt { void writeIt(string[] lines); }

    The object that creates the bill takes one in its constructor: Bill(IWriteIt writeIt)

    So you create an object to save to a DB

  15. jim2 says:

    darn it, hit tabs

    Object to write to a DB:
    WriteToDB(string[] lines) class, implements IWriteIt
    (code to set up the db)
    method writeIt(string[] lines);

    Object to print:
    PrintIt(string[] lines) class, implements IWriteIt
    (code to set up the printer)
    method writeIt(string[] lines);

    Then you can drop in whichever write class you want into the Bill class to change its behavior. And if you use something like Ninject to instantiate the classes, you don’t even have to worry about newing up an object and disposing of it.

    Anyway, that’s what I’m up to :)

  16. jim2 says:

    Remove the (string[] lines) from the constructor of the writers – my mistake.

  17. jim2 says:

    OK, so I’ve watched part of the video. :) I’m not sure where he gets the idea that objects communicate by messages. As you can see in the example above, the writer objects are passed to the Bill object. The writers don’t give a damn about the state of Bill, they just expect a writeIt method. Likewise, Bill doesn’t give a damn about whatever state is managed by the writers – and that might be consequential state like DB connection maintenance, setting up a printer, etc. Bill sees none of this. In fact, other than passing an array of lines, they don’t know squat about each other, other than the common interface. Back to the video.

  18. jim2 says:

    Finished the video. I don’t have anything against what he is proposing. His method seeks to minimize the evil posed by global variables, and make the code more self explanatory via functions enclosed in the class, which C# will do just fine.

    I agree with him that learning to write OOP has a steeper learning curve than procedural. It took me a good while to get used to it. But IMO it does result in easier-to-maintain code. If Bill isn’t printing right, I know just where to look to fix it.

    Also, I was a bit amused by his strict “god” object design. One of our code bodies deals with a big process that needs a lot of disparate functionality. It would be downright stupid to have one “god” object, other than “main”, which isn’t much of an object at all. We have multiple points of entry to our body of code, and it’s all OOP with dependency injection. I’m sure that would blow his mind. :)

  19. Steve Crook says:

    @Jim2
    > I’m not sure where he gets the idea that objects communicate by messages

    It’s a Smalltalk thing. But message passing is also common elsewhere, in multi-process/thread environments or in UIs with event queues (mouse, keyboard etc.) where events are sent to objects that want to handle them. But really, a message is just a function call completing synchronously or asynchronously.
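
    As a sketch of what that boils down to in plain procedural terms (the event names and handlers here are invented for illustration), a toy event queue is just structs plus function calls:

        #include <stdio.h>

        /* A toy "message" and its dispatch: delivering an event is just
           calling whichever function registered interest in that type. */
        enum event_type { EV_KEY, EV_MOUSE };

        struct event {
            enum event_type type;
            int data;
        };

        typedef void (*handler_fn)(const struct event *ev);

        static void key_handler(const struct event *ev)
        {
            printf("key event, code %d\n", ev->data);
        }

        static void mouse_handler(const struct event *ev)
        {
            printf("mouse event, button %d\n", ev->data);
        }

        int main(void)
        {
            handler_fn handlers[] = { key_handler, mouse_handler };
            struct event queue[] = { { EV_KEY, 65 }, { EV_MOUSE, 1 } };

            for (int i = 0; i < 2; i++)
                handlers[queue[i].type](&queue[i]);   /* the "message send" */
            return 0;
        }

    A real system would fill the queue from another thread or an OS event loop, but the delivery step is still an ordinary function call.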

    But I never had problems doing this cleanly with OO code. Particularly in Java where I used to use interfaces/anonymous inner classes to handle events. These days I use lambdas or method references where I can.

    I can remember starting out on the whole OO thing in the early 90’s and struggling to make the connection between the problems I was trying to solve and the rather dumb examples, but eventually it all started to make sense. Particularly in Java.

    When you think about it, from a C perspective, an object is just a struct that includes a bunch of typedef’d method pointers that deal with the struct’s data. It’s the sort of thing that we were all building in the ‘good old’ procedural days, trying to make sure people accessed data in a formalised manner. Inheritance is only a new struct that incorporates an existing struct and has some extra method pointers of its own. But all wrapped in some nice syntactic cake to make it easier to digest.
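
    A minimal sketch of that idea in C (names invented for illustration; real compilers typically share the method pointers through a vtable rather than storing them per object, but the shape is the same):

        #include <stdio.h>

        /* "An object is just a struct with method pointers." */
        struct Animal {
            const char *name;
            void (*speak)(const struct Animal *self);   /* a "virtual method" slot */
        };

        static void dog_speak(const struct Animal *self)
        {
            printf("%s says woof\n", self->name);
        }

        /* "Inheritance" is a new struct that incorporates the existing one
           and can add data and method pointers of its own. */
        struct Dog {
            struct Animal base;
            int good_dog;
        };

        int main(void)
        {
            struct Dog rex = { { "Rex", dog_speak }, 1 };
            struct Animal *a = &rex.base;   /* treat the Dog as an Animal */

            a->speak(a);                    /* dynamic dispatch, done by hand */
            return 0;
        }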

  20. jim2 says:

    OK, it wasn’t clear to me he was referring to events. Events can be created and used in C#. It would be difficult to run asynch processes in a tightly controlled manner without them.

  21. Steve Crook says:

    @Larry Geiger
    > Steve Crook, what is the actual performance of your programs?

    I’ve written a lot of OO code on systems ranging from low power single CPU PCs up to 32 core multi process, multi thread systems. UIs and server code distributing financial market data and pricing bonds, so it had to be responsive, scalable and to perform well under heavy load.

    Performance was never an issue because it was always an issue, something we considered during development and was part of the test cycle. We were lucky to have a guy who was particularly anal about such things and he kept us on the straight and narrow.

    My view was always that it might be possible to make a system run faster, but there was always a threshold where it was fast enough that users wouldn’t complain about performance on their target hardware and at high load. We didn’t have to jump through hoops to achieve the required performance (memory and CPU), even on relatively modest hardware.

    Like I said in an earlier comment, good programmers design and write good code. My view is that OO makes this easier not harder…

  22. hubersn says:

    The video leaves me totally confused. I don’t get the point Brian is trying to make. He jumps all over completely different and separate points, blames OOP for general IT problems, blames bad habits on OOP, and his “procedural way out of this mess” is laughable at best, because he does not demonstrate at all that it is the better way once you leave the realm of a few hundred lines of code. The problems he describes with “shared state” and “god objects” and “how to structure data and functions on that data” really have nothing to do with OOP and are generic IT problems. It is easy to get into a mess if you do it wrongly, and that does not depend on whether you end up in a mess of objects or in a mess of procedures.

    The way he differentiates between “Abstract Data Types” (which he considers good) and “Objects” (which are basically abstract data types, because they encapsulate the data along with the operations allowed on this data, which is pretty much the definition of an abstract data type) looks to me like he has not really understood what OOP really means.
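
    For readers who have not met the term: a classic abstract data type hides its representation behind a small set of operations. A minimal C-flavoured sketch (the counter type is invented for illustration; normally the struct definition would live in a .c file behind an opaque pointer, with only the declarations in a header):

        #include <stdio.h>
        #include <stdlib.h>

        /* The "abstract" part: callers see only these operations, never the representation. */
        struct counter;                                   /* opaque to callers */
        struct counter *counter_new(void);
        void counter_add(struct counter *c, int n);
        int counter_value(const struct counter *c);

        /* The hidden representation and the operations on it. */
        struct counter { int value; };

        struct counter *counter_new(void) { return calloc(1, sizeof(struct counter)); }
        void counter_add(struct counter *c, int n) { c->value += n; }
        int counter_value(const struct counter *c) { return c->value; }

        int main(void)
        {
            struct counter *c = counter_new();
            counter_add(c, 5);
            printf("%d\n", counter_value(c));             /* prints 5 */
            free(c);
            return 0;
        }

    An object bundles the same data-plus-operations idea, with dispatch and (optionally) inheritance layered on top, which is the overlap being pointed out here.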

    I agree with him that, when Smalltalk started to try out a few OOP ideas, inheritance was completely overrated as a concept. However, there are problem domains where inheritance is a good fit (e.g. UI programming), which he seems to acknowledge, but without then going further to recognize that different problem domains may have different abstraction needs and that OOP might be a good solution for at least some real-world problems.

    He does a lot of strawman bashing and holds OOP responsible for a lot of the problems the IT industry faces today, which are in reality a programmer/developer/engineering/people/education problem. Every tool can be abused. I have seen procedural spaghetti code of the worst kind, but that does not lead me to damn procedural programming. To me, he comes across like a very narrow-minded guy who probably had the misfortune of never having seen problems elegantly solved by OOP.

    My software developer journey started with a simple BASIC dialect, a good amount of assembler (back when compilers had laughable performance and I tried to do useful things with a 4 MHz Z80 in 42 KiB of memory), and then Ada 83 (which is a very good OOP introduction because, while it does not provide inheritance and polymorphism, it still offers basic OOP concepts like well-defined visibility, scope and fine-grained access control for proper encapsulation, and generics). My first “full OO” language was Ada 95, followed by C++ and Java. After more than 30 years of software development, I still think that OOP is the best general purpose programming paradigm, but I strongly prefer multi-paradigm languages because one size does not fit all.

  23. Chris in Calgary says:

    > Ideology is NOT a good reason

    Great quote. Full stop. Don’t need to follow it with anything else. Bravo.
