Approaching projects - Part 8

- 22 min read

Do you have a project, a skill, or a goal you'd like to complete or make progress on? Are you on your own, lacking leadership and structure, or could you use more of both for yourself and your team? Here's what I've discovered works for me as a solo software developer and later as an engineering manager. This is a series of posts; part 9 on wrapping up comes next.

In the last post Approaching projects - Part 7, I described how to organize, execute, and monitor the progress of a project's tasks. This post will focus on what to do while the project is being developed and implemented, how to cultivate and curate the plan and respond to changes.

Throughout this series of posts I will be using this user story: As a person with money, I want a custom computer desk to fit in a weird spot in my home. With the chosen solution: Build the desk myself.

Cultivating the plan

The difference between a project plan and manufacturing is that a project is being done for the first time, while manufacturing repeats something already done under controlled circumstances. A project plan will not go smoothly on all points; this reality has to be accepted. So while you may have written all the tasks and then delegated them to another human to execute and assemble, your job isn't done. Unless it is, officially. But be open to amendments and deviations. The next step is to proactively cultivate the project.

In my eyes, agile is about cultivating a project through its lifecycle.

Now, I have not studied agile or read books on it. What I have is experience from multiple companies since 2010 in their development cycles and have observed other teams operate with different models. One employer's model was single-developer-waterfall and they called it agile. I disagree with the label, but it shows how every place has their own definition of agile.
So what's my definition of agile? Agile is about building satisfactory deliverables by allocating and directing resources at the optimal time with a flexible perspective on the needs of the deliverable. Sure, there's more to it.
A shirt that says "Weeks of coding can save you hours of planning"
I have this shirt, though this is not my photo.
The process I describe in this series has some differences from by-someone-else's-book agile. There's a lot of work that occurs before writing production code. Our time is expensive. Let's apply our time where it is most effective and then check if we're still on target regularly.
My current employer made the mistake of saying yes to a client on every line item. So desperate for a deal, they ran an acquired company into the ground with technical debt; after trying with another client, it was shuttered a few years later. Wait, if you build what the client says they want, won't they be happy?
The deal ended because their needs were not satisfied. That project burnt developers out and they left. When the acquired company got shuttered a few years later (this year), its last remaining developer and sales guy left. Said developer joined the sales guy somewhere else with a new VC fund. Best wishes to them; maybe the two will make different decisions this time.
The agile manifesto actually calls for customer collaboration over contract negotiation. This short hint of a story hints at why cultivating a project is so important. Yeah, my place is making mistakes, but at least I'm learning from them at a distance.

So what is cultivating a project? Within agriculture, it would be raising a crop under controlled conditions to bear useful and bountiful product. Agriculture requires preparation of the land, seeding, regular follow-up, and harvesting. There are several parallels to software development in this description, but what I'll focus on here is the regular follow-up piece. Discovering and satisfying the needs of a project is intangible. And like any intangible deliverable, tangible tests are necessary to ensure the needs will be satisfied. When a test is not satisfactory, an alteration to the plan is appropriate. Reacting and correcting the plan is essential to cultivating the project.

Reacting and Guiding

Delivering the right thing requires staying in touch with the needs of the stakeholder and sponsor, hereafter labeled the "customer". Often, one of these two things happens:

  1. The customer's communicated expectations are not accurate or aligned with the needs for which the plan was designed.
  2. The plan was not accurate in its approach to deliver what it was intended for.

Inaccurate Expectations

This is the premise of agile: get stuff out so you can respond quickly. But before all the code and all the expensive effort... keep in mind the user stories and the resources available to the end user, and take some time to "play house".

When you're designing something to be used by a human in the real world, consider role-playing as them with some hand drawn wire-frames. You may have to shadow the end user to understand their workflow, their personality, their resources, their expectations, their irritations, and so on.
A prior employer would fly all roles to job sites to do installation and go-live support: those first few days when our software became the software to use. Yes, there were roles specifically for implementation and technical support, and they had been at the job site for months or weeks respectively, but developers, quality assurance, and others were on the floor to enable and guide the end users with in-depth knowledge of the product while workflows were improved.
Even after taking literally months to configure everything with the customer (the employer of the end users), there are mistakes, there are edge cases. You will encounter the same in whatever project you have.

Upon discovering a discrepancy between expectations and reality, start with the simplest, least technically involved workaround possible. Think of it as a splint. It is something to get you by.

It could be that a checkbox just dumps some constant text into a comments box, when the long-term solution is to have a discrete value communicated between the client, server, database, and so on. Sometimes these stopgap solutions are left in for years. But that's life. Sometimes choosing to ignore it is acceptable to the business.
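A stopgap like that checkbox can be literally a few lines. Here's a minimal sketch, with entirely hypothetical names (`rush_order_checkbox`, the note text), of folding a checkbox into a free-form comments field until a discrete value exists end to end:

```python
# Hypothetical stopgap: a "rush order" checkbox just prepends constant
# text to the comments field. The long-term fix would carry a discrete
# value through client, server, and database. All names here are made up.
STOPGAP_NOTE = "[Rush order requested]"

def apply_rush_checkbox(form_data: dict) -> dict:
    """Fold the checkbox into the free-form comments field."""
    if form_data.get("rush_order_checkbox"):
        comments = form_data.get("comments", "")
        if STOPGAP_NOTE not in comments:  # don't duplicate on re-submit
            form_data["comments"] = f"{STOPGAP_NOTE} {comments}".strip()
    return form_data

order = apply_rush_checkbox({"rush_order_checkbox": True, "comments": "dock 3"})
print(order["comments"])  # [Rush order requested] dock 3
```

Downstream, anything that needs the flag has to string-match the comments field, which is exactly why this is a splint and not the long-term design.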
Is losing $15 of some product a month worth the engineering, installation, and upkeep of a long-term measure? Risk is a big part of this decision-making process. In the chemical industry, not taking care of dust accumulation can lead to explosions that result in loss of life. Explosive dust can collect above those foam ceiling tiles. One solution is to have someone open the ceiling and clean them out each month. That could cost about $350 in labor each time. Another is to remove the ceiling tiles so dust cannot collect there and any hidden risk (dust collecting on support beams) becomes visible. Removing those tiles may cost $1,500. After five months, removing the tiles is cheaper.
This is a real story, by the way. But what are the consequences of removing the ceiling tiles? Air conditioning doesn't work as well anymore in that room! After constant complaints of discomfort from staff in this control room, where a mistake due to fatigue could cost hundreds of thousands of dollars per hour in lost opportunity or destroyed equipment, the manufacturer bites the bullet and installs an oversized HVAC system that costs $20,000. Humans are remarkably expensive components.
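The break-even in that story is simple arithmetic; a quick sketch (figures taken from the anecdote above) shows where the recurring labor overtakes the one-time fix:

```python
# Back-of-the-envelope break-even from the dust story: $350/month of
# cleaning labor versus a $1,500 one-time tile removal.
monthly_cleaning = 350
tile_removal = 1_500

# First month at which cumulative cleaning cost exceeds the one-time cost.
months = 1
while months * monthly_cleaning <= tile_removal:
    months += 1
print(months)  # 5 — by month five, cumulative cleaning exceeds removal
```

Of course, as the HVAC punchline shows, the sticker price of an option is not its full cost.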

Sometimes the workarounds will cause more problems, and you may have to answer for those as well once they're discovered. This is just the nature of constrained resources, limited ability to scope the impact of a decision, and what makes business sense at the time.

While figuring out and installing a workaround, I expect that you'll be exposed to the user story involved. Explore the problem space to find out what was meant to function in that space but did not, if something is lacking entirely, and so on. Iterate on the design, then realign the plan to the currently known reality. In a way, this first possibility (inaccurate expectations) leads to an inaccurate plan. The long term fixes to inaccurate expectations are the same tools as addressing an inaccurate plan.

Here's an example from my table. One of my early requirements was an insert to bring power to the desk surface through a circular extension interface. I chose one with USB ports too! Turns out these units have intense coil whine, which I am sensitive to.

A hole in my desk

So now I have a battery backup on top of the desk and use the hole to pass the speaker cable through. I probably could send the power cables through there too but there is not a matching hole on the other side for routing back up. The wires will for now just pass between the back of the desk and the wall. This is a workaround with no priority to attend to.

Inaccurate Plan

When the project is not going as expected, you have a few tools to choose from.

  • Design and redesign
  • Experiments
  • Splitting tasks
  • Removing tasks
  • Adding tasks
  • Recurring Scheduled Reviews
This assumes the project approach still has merit and remains aligned and compatible with the most important objectives.

When a holistic redesign feels necessary, consider that action fraught with risk and avoid it.

This is step 8; you should have done some feasibility and validation by now, right?

When a partial redesign (or a patch) is necessary, do due diligence to measure the impact of the change and disclose it. If the impact is unknown, consider an experiment. Review the above tangent on explosive dust if you need justification for impact analysis with regard to changes. When making designs, review them with peers and maybe the stakeholder, if appropriate. Include the findings on why the existing design did not meet expectations, the reality encountered, the impact analysis, and the proposed change with sufficient justification, disclosure of the resource expense to perform the change, and the long-term resource commitments once the change is applied.

Often enough, the optimal thing to do is unknown when expectations are not being met. Figuring out a known optimal thing may require experimentation. As a reminder, experiments should be agreed to by the sponsor. Experiments consume limited resources in addition to the project's execution. Further, experiments can harm reputation when communicated as a promise to solve something if the experiment does not provide a satisfactory finding. When a project is in motion, maybe even partially installed, an experiment may be to measure the effectiveness of a workaround. Additional experiments may propose and trial other workarounds. At the end of a series of experiments, promoting the most optimal workaround with planned refinements may be the best move.

Regularly, tasks turn out to have a greater scope than the title and description suggest in order to satisfy their acceptance criteria. It is on the developer or implementor to escalate, and they should, but it happens often enough that some proactive attention should be given specifically to identifying tasks that should be split apart. Usually the solution is to take the acceptance criteria and piece it apart like a project.

I think that making a sub-task just to add unit tests is inappropriate and not organizationally healthy. Rather, what I'm suggesting is that the deliverable of a task be taken apart. For example, a task like "Switch from server-side sessions to encrypted cookie sessions" comes with a lot of baggage. It is in fact project scale. Examine what's been attempted (or would be) and see where divisions can be made.
For example, can we set a cookie with a string that encrypts and decrypts successfully using a proper cryptography library? Next, can the session middleware extract the cookie and set up thread locals with the session state or apply it to the request object? Last, can we capture changes to the session state and update the cookie on the request going out? What if the cookie is modified during view rendering because legacy .jsps are involved? Or do we have to buffer the response and capture changes at the end of rendering because legacy?
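To make that first split-out milestone concrete, here's a minimal sketch of the round-trip question: can session state survive a trip through a cookie value? For brevity this stdlib-only illustration signs the payload with HMAC (tamper-evident, not confidential); a real implementation should use an authenticated-encryption primitive from a proper cryptography library, as the task demands, and the key here is obviously a placeholder.

```python
# Sketch of the first milestone when splitting "switch to encrypted
# cookie sessions": does session state round-trip through a cookie?
# NOTE: this demo only SIGNS the payload (tamper-evident, not secret);
# real code would use authenticated encryption from a proper library.
import base64
import hashlib
import hmac
import json

SECRET = b"demo-key-not-for-production"  # placeholder key

def encode_session(state: dict) -> str:
    """Serialize session state into a signed cookie value."""
    payload = base64.urlsafe_b64encode(json.dumps(state).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def decode_session(cookie: str) -> dict:
    """Verify the signature and recover the session state."""
    payload, sig = cookie.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("cookie failed verification")
    return json.loads(base64.urlsafe_b64decode(payload))

cookie = encode_session({"user_id": 42})
print(decode_session(cookie))  # round-trips back to the original dict
```

The later questions in the paragraph above (middleware extraction, capturing mutations after rendering) each build on this same round-trip, which is what makes them natural division points.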
You will find yourself as a planner ignorant of the scope of some of the things you write. Developers and implementors will sign off on tasks without recognizing the true scope of what they agree to. If this becomes a regular issue, look for process improvements; do not resort to personal blame. If it is a regular issue with a specific individual, that's a human resources process topic and I will not go there.

Removing tasks that are no longer relevant is easy. It's like removing dead code. Yoink! It's marked as "Done" ✅ or "WON'T FIX" 🙅. Discuss it with the developer or implementor who would do it, then remove it from your tracking method of choice. Sometimes this comes up because the customer's needs changed; for example, they upgraded something else early, so a compensating or compatibility process is no longer needed. Other times it's because the solution to a prior task already covered the functionality documented in the task under consideration. Lastly, sometimes it's just a duplicate. No biggie.

Adding tasks is usually done due to new designs, missed details, tasks completed but not fully meeting acceptance criteria, or splitting up tasks. That, or bugs were found. Maintain the quality of task descriptions. You may need to come up with templates for on-demand new tasks, such as bugs.

Lastly, recurring reviews. Some call this backlog refinement (formerly known as backlog grooming). The definition linked is exactly what I am thinking of. Get the team together to review progress made, collect outstanding ideas and concerns, alter existing tasks and delegate work. If this is a solo project, I still greatly recommend a time to step back and switch perspectives. What you discover and think about in your role as developer and implementor may have different meaning and be actionable at a higher level perspective. When the project is being actively deployed, stay in frequent touch with the end users or someone who acts as an interface between end users and the team making changes to process and resources. Signals from end users will affect priorities at this recurring event.

Wrapping up the install, roll-out, and deployment of a project comes next!

The main takeaways of this post are:

  • The needs of a project become clearer the closer it is to being finished.
  • The known needs are intangible and must be discovered through tangible steps.
  • Inaccurate expectations by the customer often need workarounds and later improvements.
  • Revising inaccurate expectations will result in an inaccurate plan.
  • Inaccurate plans can be improved by redesign, experiments, adding / removing / splitting tasks, and recurring examination of the project's execution.
  • Changes to designs, even small ones, should come with impact analysis and short- and long-term resource analysis.

This post is the eighth of the series, part 9 on wrapping up comes next.

You can succeed! Explore the communicated expectations of your stakeholder and sponsor. These expectations may be inaccurate and it is up to you to respond by cultivating your plan towards the optimal solution. No plan is perfect, be ready to respond to changes and do so regularly.