Would you say that bugs are not technical debt, but they can be caused by technical debt?
For me the technical debt would be manifest in the way every new bit of code takes longer to write (or at least to write correctly) because it has to work round the bug, or system design choice, that nobody can fix for Reasons. It’s ultimately the reason to rewrite the whole thing from scratch rather than amend existing code.
Oi, that is a difficult one.
Technical debt definitely causes additional bugs.
As bugs usually need to be fixed ASAP, while fixing technical debt can usually be postponed, at first glance I would say bugs and technical debt are two different things.
edit: IMO at the point where a team is actually talking about technical debt there are definitely quality issues that need to be addressed. And as a dev I would usually argue that I didn’t have the time to write the stuff properly.
‘Tech debt’ is often used to mean ‘crap code’. In my view, that’s not right. Crap code is crap code. Tech debt is code that was done wrong, because shipping it now was more important than doing it right. It’s intentional. Code that is just old isn’t tech debt, it’s just old. (In one case I’ve dealt with recently, it was written against a deprecated version of the API (and stopped working on upgrade, because the old api went away), but the new api didn’t exist when it was written. That’s not tech debt. It’s just work. Maybe the work that went into maintaining it instead of rewriting was tech debt.)
A brief example. At a previous job, I was decomposing a monolithic application into smaller independent components. For two of the components, I chose a serialization format that was essentially JSON. I knew that serialization/deserialization overhead was going to be a performance factor at some point, but didn’t have a good sense of when. I needed to get the thing working NOW, because months had already been wasted on other parts of the project. I had JSON serializers available in both environments, and it was easy to get it going. It got shipped like that, and deserialization was indeed a performance bottleneck. Someone spent a bunch of work making it faster, which turned another design choice I’d made in the interest of shipping into the new bottleneck. When I fixed that, we only got a twofold speed increase, instead of ~30-fold. A binary format was required to make it go fast, but that didn’t happen while I was there. (Also: the original slow JSON version was still an order of magnitude faster, and more correct, than the original monolithic version.)
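To make the trade-off above concrete, here is a minimal sketch in Python, using the standard library’s `json` against `pickle` as a stand-in binary format (the payload shape and the timing harness are my own assumptions, not the actual system described):

```python
import json
import pickle
import timeit

# Hypothetical payload, standing in for the inter-component messages above.
payload = {"id": 12345, "values": list(range(1000)), "label": "sensor-A"}

def json_roundtrip():
    # Text-based: easy to adopt, but parsing cost grows with payload size.
    return json.loads(json.dumps(payload))

def binary_roundtrip():
    # Binary stand-in: same data, typically cheaper to decode.
    return pickle.loads(pickle.dumps(payload, protocol=pickle.HIGHEST_PROTOCOL))

# Both formats must reproduce the payload exactly.
assert json_roundtrip() == payload
assert binary_roundtrip() == payload

json_time = timeit.timeit(json_roundtrip, number=1000)
bin_time = timeit.timeit(binary_roundtrip, number=1000)
print(f"JSON roundtrip:   {json_time:.3f}s")
print(f"binary roundtrip: {bin_time:.3f}s")
```

The point isn’t the exact numbers (which vary by machine and payload), but that the “easy now” format and the “fast later” format can be swapped behind the same roundtrip interface, which is what makes this kind of debt repayable at all.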
I just finished running a Sprint Review.
For stakeholders I tend to lump a few things into technical debt:
- Actual bugs that aren’t interesting enough to dwell on in the review. May or may not be bugs in our product.
- Non-bugs that we hadn’t anticipated having to do for a particular feature that aren’t interesting enough to dwell on in the review. Things you discover as part of a work item but push into another work item to address later.
Here’s another one:
Six months ago I wrote a Definition of Done for our company (whether or not this was a good idea remains to be seen). One of the things I included was regression tests. One of the devs argues that this has created technical debt where none previously existed. I think the technical debt was there all along, and now we have decided to do something about it. Thoughts?
Ultimately I don’t really care which of us is “right”, as long as someone writes the tests but I am interested in other people’s perspectives.
You made an issue visible that most certainly existed previously.
The question is though what does the company want to do with the now visible issue?
You could analyse why there have not been any regression tests in the past. Why did the devs not write them? No time? No knowledge? That needs to be addressed if the team is going to meet the updated requirements; requirements alone change very little. There also needs to be an assurance that everyone has signed on to the additional work that is now required, which means the devs actually get the time to write those tests.
I feel that transparency is the real value. Because once that has been created devs gain a certain amount of freedom from that debt. It is no longer only up to them. But at the same time there is responsibility to do better going forward.
It is kind of difficult to explain in written form. I hope I am getting my point across.
If the regression test requirement is causing a thing not to be done because it would be extra work, that’s a similar result to technical debt, but I’d argue a different cause.
The reason those tests don’t exist already is almost certainly due to lack of time on the devs’ part. They all know what regression tests are and have just about been persuaded that the existing integration tests don’t cover what is needed. I suspect that there is some anxiety that people will expect them to magically produce extra tests with no effort, or blame them entirely for the lack of tests.
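For anyone arguing about what these tests actually cost: at its simplest, a regression test just pins down behaviour (often a previously fixed bug) so a future change can’t silently break it again. A minimal sketch, where `normalize_name` is a hypothetical stand-in for real product code:

```python
def normalize_name(name: str) -> str:
    # Stand-in product function. Suppose an earlier version mishandled
    # surrounding/repeated whitespace, and that bug has since been fixed.
    return " ".join(name.split()).title()

def test_normalize_name_handles_surrounding_whitespace():
    # Regression test: pins the fixed behaviour so it can't quietly regress.
    assert normalize_name("  ada   lovelace ") == "Ada Lovelace"

test_normalize_name_handles_surrounding_whitespace()
print("regression test passed")
```

The effort really is nonzero, which is the devs’ point; but each test is small, and unlike an integration test it names the exact behaviour being protected.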
The goal of Computer Science is to build something that will last at least until we’ve finished building it.
Nice one, fortune : )
I can’t remember if I’ve mentioned this before but my department has hired several “Agile Coaches” to help us “do Agile better”.
I’ve just reached 50 sprints as Scrum Master, and we’ve been “doing Agile” for nearly 5 years at this point, which makes us the most established Agile team in the department, if not the business.
For that reason my boss volunteered the team to be the subject of a number of “observations” over the coming weeks. This morning I had one of the Agile coaches turn up to my Sprint Planning session; he didn’t say anything out loud, but did spend the meeting chipping in with messages on Skype.
I’m apparently not being graded/scored but I’m sure at some point I’m going to receive some feedback. Fun.
I do love feedback.
I guess it goes both ways. We’re all being asked to provide feedback on how well we think Agile is going.
My Product Owner is going to get some not so great feedback.
Our Sprint Retrospective was about as long as our Sprint Review, which is to say both were quite long.
The Agile Coach dipped out of the Sprint Review about halfway through however so he missed all the discussion in the Retro that he might have had some input on.
I’ve been invited to a meeting on Monday to run him through how we’re doing things.
Never mind. I don’t have a sword these days anyway.
Finally got round to finishing my command-line snapcast stream switcher (last touched last November). It turns out – the documentation doesn’t mention this – that the way to create a group is to kick a client out of another group, at which point it gets a new group of its own (helpfully, with the same name as the old one, but a different ID string). At which point I can assign a stream and a name to it.
I think the idea is that each group is meant to be a group of hosts that always play together (e.g. the machine in the living room, the machine next to it in the study, etc.), so you switch them all together from stream A to stream B. But if I did that most of my machines would be in separate groups of their own, so instead I assign a group long-term to a stream, and switch machines in and out of it.
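For the record, the kick-out-and-reassign dance above can be sketched as JSON-RPC 2.0 requests against snapcast’s control port (TCP 1705 by default). The method names (`Group.SetClients`, `Group.SetStream`, `Group.SetName`) are from my reading of the snapcast control API, and the group/client IDs are placeholders; check `Server.GetStatus` output for the real ones on your server:

```python
import itertools
import json

_id_counter = itertools.count(1)

def rpc(method, params=None):
    """Build one newline-terminated JSON-RPC 2.0 request for snapcast."""
    req = {"id": next(_id_counter), "jsonrpc": "2.0", "method": method}
    if params is not None:
        req["params"] = params
    return json.dumps(req) + "\r\n"

# "Kick a client out": rewrite the old group's client list without it.
# The evicted client lands in a fresh group of its own (new ID, old name),
# which we can then rename and point at a stream.
kick = rpc("Group.SetClients", {"id": "OLD_GROUP_ID", "clients": ["remaining-client"]})
rename = rpc("Group.SetName", {"id": "NEW_GROUP_ID", "name": "study"})
assign = rpc("Group.SetStream", {"id": "NEW_GROUP_ID", "stream_id": "stream-B"})
print(kick, rename, assign, sep="", end="")
```

Each string would be written to the open TCP socket in turn; this sketch only builds the requests, since the undocumented part is exactly which calls to make and in what order.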
Overheard the agile consultants talking about their “ongoing assessment” despite my boss assuring me I wasn’t being graded.
Speaking of, as he’s new, my boss asked me to support him in a meeting with the agile consultants, a meeting which he’s now told me he can’t attend.
“Just pop into the meeting room, I’ll be there in a minute.”
“There is one consultancy fee under the table, and two daggers.”
That was an absolute grilling. Good thing I know my shit.