Working Practices - Chris Hogan
My working practices have been formed by an interest in Software Engineering (I said interest in, not belief in), by agile methods (even before the term "agile" was applied to software development) and by an awfully long time spent enhancing and supporting our APL-based dealing system 4xtra.
My aim is to be both productive and "lazy" - I don't want to keep on reinventing the wheel, nor do I wish to have to do anything which distracts me from solving the particular problem I'm working on.
To this end I'm used to deploying a framework within which I try to do most of my development. 4xtra grew to have a considerable number of tools & I've become comfortable using their more modern descendants, but even in my early days I used the RX Skeleton (a framework developed by Phil Gray, Stan Wilkinson & others) & then tools produced at Allan D'Morias & Associates, where Stan, Marc Griffiths & I worked together for a good number of years.
So what did I pick up?
Timesharing - now that may seem a daft thing to say, but the way that one had to develop timesharing systems has influenced the way I design systems. It's obviously a very thin client, but the Sharps environment was, in a sense, Personal Computing. One had a pure APL environment isolated from the considerations of the hardware. Having said that, most of the systems I worked on at that time were intended for use by a physically dispersed set of users. Coupled with the erratic nature of telephone lines in those days, it instilled good habits with regard to recovery, restartability & general interaction in real time between users of the systems.
PCs - the immediacy of user interaction with a PC was a shift from the "block mode validation" of screens & printer terminals used for Timesharing systems. It took a while to get used to, but now I hate it if I have to go back to the "old style", which influences the way I design web-based systems - which do have more than a hint of block-mode communication about them.
Function Files - I still use a great-grandson of the function filing system we originally developed for ADM. I reproduced the functionality for 4xtra & Phil Last rewrote the whole thing for Dyalog, where it grew into Maya, our code management system. I know Phil has now gone on to produce Acre for the Carlisle group, but that solves a different set of problems. Our development group is loosely scattered around London; our clients have sometimes been in London too, but have also been in Stockholm, Porthcawl, Buenos Aires - in fact, if you go back to the Rank Xerox days, practically all over the world. The point here is not to share development over a dispersed team, but to deliver updates to remote clients (although an ability to develop code remotely is a side effect). We don't use the internet in the same way as Acre, as Maya pre-dates the appearance of the high upload speeds which make these features of Acre practical.
Agile Methodologies - Agile methods used to be called Lightweight (which probably smacked too much of frippery & frivolity) and before that Rapid Application Development. We APLers were "agile" even before RAD. The basic principles common to all Agile/Lightweight/RAD approaches (incremental delivery, direct communication or co-location with the end user, small teams, prototyping, etc.) were all fundamental to the way I developed APL systems from the very first day I started using it. I've often wondered which particular elements (the language itself, the timesharing environment, the isolation from hardware) drove the near-universal adoption of this approach, but I'm jolly glad they did.
Software Quality - the concept of ISO9000 has always fascinated me: the idea that production quality is controlled in such a way as to make it simple to keep turning out the same widget to a standard specification. Not that I see a lot of direct application to software development. We don't churn out dozens of replicas of the same program every time we start a new project. But even the "pattern development" beloved of many developers is an attempt to ensure the reproducible quality of a piece of software. If one can have a standard, easy-to-use recording tool, then use it.
OK, enough background. What do I actually do?
As I've said, my code sits in a function file. The workspace is built dynamically on loading by a single resident function. I then develop as you would expect in the interactive session. Periodically I make sure the function file is updated with my changes and reload the workspace. Why? Because I don't want all my changes saved automatically to file (some, perhaps many, are failed experiments), but I do want to perform an "integration test", making sure that the changes I've made don't trip up the system & that I have included all the relevant alterations. Now it might seem tedious to keep on reloading, but I find that because I do it this way I keep focused on fast loading times & very concise paths through the GUI (unlike, say, Phil Last, just about everything I do has a major GUI element). This benefits the user in the long run: they have a system which is very easy to navigate, with no long complex interactions - all because I'm too lazy to keep working through a GUI to get back to the point at which I was working. Because of this interweaving with the GUI, at this level I can't really see the use of a "test harness" - I've tried, & the damn users never press the buttons in the order that I want them to - so I design a clean navigation path & independent dialogs to minimize the complexity of the interface, & then I can test APL functions which have little, if anything, to do with the GUI directly.
Separating different aspects of functionality - I said my code is very focused on the interface, but I don't like interlacing the GUI elements with the "deeper" system functionality. This means I use my framework to establish a standardized GUI and generate the GUI in a distinct function (or group of functions). The framework handles the interaction with the user & gives a very simple interface to the callback functions. This often means that there appear to be major stylistic differences between the parts of the code which I write: the GUI builder has a script-like appearance, calling utilities to generate the GUI elements; the major control logic often uses IF or SELECT constructs to capture all of the calls to different elements of the system in a single easy-to-read function; but the "deeper" functionality is coded using computational logic & a heavy dose of dfns. I don't use dfns for the major logic, as I find it too difficult to step through them at this higher level: once I'm down to smaller functions which can be called directly to debug them, then the power of dfns comes into its own. Or maybe I'm just schizo.
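The three-layer split described above can be sketched outside APL. This is a hypothetical Python analogue (all names invented for illustration, not the author's framework): a script-like builder that only declares the interface, a single easy-to-read dispatcher standing in for the IF/SELECT control logic, and small pure functions for the "deeper" work that can be tested without any GUI.

```python
# Hypothetical sketch of the three layers (Python analogue, not the real APL).

# 1. Script-like GUI builder: declares the interface, contains no logic.
def build_gui():
    return [
        ("button", "Load", "load"),   # widget kind, label, event key
        ("button", "Save", "save"),
        ("field",  "Rate", "rate"),
    ]

# 2. Control logic: one dispatcher capturing all calls in a single function.
def on_event(event, state):
    if event == "load":
        state["data"] = load_data()
    elif event == "save":
        save_data(state["data"])
    elif event == "rate":
        state["rate"] = normalise_rate(state.get("raw_rate", "0"))
    return state

# 3. "Deeper" functionality: small pure functions, debuggable directly.
def normalise_rate(text):
    return round(float(text), 4)

def load_data():
    return [1, 2, 3]   # stand-in for the real data source

def save_data(data):
    pass               # stand-in for the real data sink
```

Because layer 3 never touches the interface, it can be exercised directly - which is exactly the testing strategy the paragraph above describes.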
I'm not a great fan of "Object Orientation" - now, before I draw too much opprobrium from OO fans, I do think encapsulation (I use namespaces to isolate elements of my code & don't "reach into" a namespace from outside), polymorphism (after all, if one is going to seal up one's code in something like a namespace, then a robust interface is essential) & inheritance (consistency is always a good idea) are all things I can agree with, but I find the whole OO thing yet another cult, which introduces another layer of jargon between the developer and those who use the developed code. It was bad enough when working with "domain experts" in large organizations, but recently I've been working with small groups of people, often with an educational (non-technical) background, & they wouldn't know a "class" or a "method" if it bit them on the bum. My ultimate aim is to have a jargon-free approach to development: I've taken to doing things step by step, not in a series of iterative timeboxes, for example. I can still define a step as a fixed period of 1 day or 5 days, but it's not as confusing a term.
- Of course, KISS - Keep It Simple Stupid.
General Points
- I avoid global variables, unless they have to be persistent between function calls & so become "properties" in a namespace.
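As a rough illustration of that point outside APL (a hypothetical Python analogue, since Python has no direct equivalent of an APL namespace): values which must persist between calls live as attributes of one object rather than as module-level globals.

```python
# Hypothetical analogue: persistent state as "properties" of a
# namespace-like object instead of global variables.
class Session:
    def __init__(self):
        self.call_count = 0   # persists between calls, but is not global

    def next_id(self):
        self.call_count += 1
        return self.call_count
```

Nothing outside `Session` reaches into its state, mirroring the "don't reach into a namespace from outside" rule.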
- Many of my functions have a strand assignment as line one, breaking up the right argument into "keyword" parameters. They often have an assignment as the last line to gather up several variables into the result. I know this can be done by Dyalog automatically now, but much of my code is pre-version 12. Also I use overtakes to add defaults to the end of the list. I can't supply a default for something embedded in a keyword style list of arguments, but at least I order the items in the argument so most frequently set go to the front.
- This agrees with Dick Bowman's "data to the right, controls to the left".
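The strand-assignment-with-overtake idiom above can be mimicked in other languages. Here is a hypothetical Python analogue (function and parameter names invented): the first line pads the supplied argument list with defaults for any trailing items not given - the "overtake" - then unpacks it into named parameters, with the most frequently set items at the front.

```python
# Hypothetical analogue of the APL idiom: pad the positional argument
# list with defaults (the "overtake"), then unpack into named parameters.
def draw_box(args):
    defaults = [10, 20, "black", False]   # most frequently set items first
    width, height, colour, filled = list(args) + defaults[len(args):]
    return {"width": width, "height": height,
            "colour": colour, "filled": filled}
```

As in the APL version, a caller who supplies only the first item gets defaults for the rest; items embedded mid-list still can't be defaulted individually.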
- I can't remember when I last used right arrow (other than to continue a stalled function) or had a function with a line label in it.
- I use error trapping for errors - not alternate logic paths.
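In Python terms (a hypothetical sketch, not the author's APL trap setup) the distinction might look like this: expected cases handled by ordinary logic, with the trap reserved for genuine failures rather than used as an alternate route through the program.

```python
# Trap genuine errors; don't use the trap as an alternate logic path.
def read_rate(text):
    # Expected case handled by ordinary logic, NOT by catching ValueError:
    if text.strip() == "":
        return 0.0            # empty input is a known, valid case
    return float(text)        # a non-number here genuinely IS an error

def safe_read_rate(text):
    try:
        return read_rate(text)
    except ValueError:        # genuine failure: report it, don't route on it
        raise ValueError(f"not a rate: {text!r}")
```

Catching the exception to implement the empty-input case would be exactly the "alternate logic path" the point above warns against.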
- I stopped using diamonds when I got a decent screen resolution. Actually, I have thought of one use for them. I have a function which generates a colour scheme, based on a hex code argument. It returns a two-item vector: the first is a vector of hex colour codes, the second a vector of text strings which explain the colours. Each pair is generated by two statements on one line separated by a diamond, so the text assignment also serves as a comment for that line.
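The shape of that function can be sketched as a hypothetical Python analogue (the colour derivations here are invented for illustration, not the author's actual scheme): each "line" pairs a derived hex code with the text that explains it, so the explanation doubles as the comment, and the result splits into the two-item code/text structure.

```python
# Hypothetical analogue of the diamond idiom: each tuple pairs a derived
# hex colour with the text explaining it, so the text is also the comment.
def colour_scheme(base):                        # base e.g. "336699"
    r, g, b = (int(base[i:i + 2], 16) for i in (0, 2, 4))
    pairs = [
        (base,                                 "base colour as supplied"),
        (f"{r//2:02x}{g//2:02x}{b//2:02x}",    "darker shade for borders"),
        (f"{255-r:02x}{255-g:02x}{255-b:02x}", "complement for highlights"),
    ]
    codes, texts = zip(*pairs)                 # two-item result
    return list(codes), list(texts)
```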
- Generally I have adapted John Jacob's axiom (referring to paper documentation) - "If you have to turn it over it's too damn long" - so if I have to scroll to see the whole function, it's too big & should be broken into smaller logical sections.
- I agree with Allen Holub - undocumented (that is, unexplained) code is worthless - but I loathe over-commented functions too. Don't tell me 'what' it's doing, I can see that; tell me 'why' you're doing it.
- I don't use the tracer (I know what is happening to variables & if I have to stop a function then a jot on an otherwise blank line is usually sufficient), so my introductory comments are at the beginning, but all end-of-line comments are displaced to an alignment which keeps them clear of the code.