Conceptual topics overview
Up until this point, we've mostly been focusing on the reference aspects of API documentation (the endpoints). Reference documentation, however, is only one part of API documentation. In this section, I'll cover the main conceptual topics commonly found in API documentation. Rather than "conceptual topics," you might think of this type of information as the "user guide."
The following conceptual topics are commonly found in API documentation:
- API Overview
- Getting started
- Authentication and authorization requirements
- Status and error codes
- Rate limiting and thresholds
- Quick reference guide
- API glossary
- API best practices
Beyond the sections outlined above, you might want to include other tasks and tutorials specific to your API, based on what you expect your users to do and the business scenarios for which they’ll use your API.
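To give a small taste of why these conceptual topics matter, here's a hypothetical sketch of the kind of client behavior a "Rate limiting and thresholds" or "Status and error codes" topic typically documents. The endpoint URL, retry policy, and numbers below are made up for illustration; they don't come from any particular API.

```python
# Illustrative only: the endpoint, retry policy, and limits are hypothetical.
# This is the sort of guidance a "Rate limiting" or "Status and error codes"
# conceptual topic usually spells out for API consumers.
import time
import requests

def get_with_retry(url, max_attempts=3):
    """Call the API, backing off when the server reports rate limiting (HTTP 429)."""
    for attempt in range(max_attempts):
        response = requests.get(url)
        if response.status_code == 429:
            # Respect the Retry-After header if the server sends one; otherwise wait 5 seconds.
            wait_seconds = int(response.headers.get("Retry-After", 5))
            time.sleep(wait_seconds)
            continue
        response.raise_for_status()  # other 4xx/5xx codes would be listed in the error-code topic
        return response.json()
    raise RuntimeError(f"Rate limit still in effect after {max_attempts} attempts")

# Example usage with a hypothetical endpoint:
# data = get_with_retry("https://api.example.com/v1/items")
```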
In each conceptual topic, I'll provide general descriptions and overviews of what these sections contain, followed by examples from actual API documentation sites.
With each topic, there's an activity for you to go into your open-source project and evaluate the conceptual topic at hand. Although many of these topics are straightforward, they get a lot more interesting when you look at the information in context, that is, when you see how the information is actually implemented.
Definitely not. The code reviewer should be able to review the code/diff online without having to run it.
During QA testing, I expect them to confirm that the code actually works as intended, and fail the ticket if it does not.
This is where the problem is. Testing is not about confirming that your code works, but about trying to show where it's broken. If you start thinking like this, everything will fall into place.
Since we've got a separate environment for developers, we decided as a team that the reviewer will smoke-test the task they're checking before passing it on to QA and the staging environment. It puts less work on QA, whose testing in many cases was just repeating what was already done in the dev environment, and it also helps find potential bugs faster, 'closer' to the developers.
I would say absolutely not.
The person doing the code review should be well acquainted with implementation strategies and coding standards. Testing should be the job of Quality Assurance personnel and it is actually beneficial that they not know about any of the things a person would need to know to do a good code review. In my experience those who have been involved in the implementation of software are more likely to drive software in a way that won't produce bugs, whereas those who are completely unacquainted with the implementation are more likely to drive software in ways the developers may not have anticipated.
QA and software development require completely different skill sets; don't confuse the two.
Testing should be in the form of both black box testing and white box testing.
I think it's also beneficial for the code reviewer not to become attached to the code they're reviewing, which is likely if they're made to invest time in testing it.
We've had similar discussions, not so much due to bugs getting past QA, but because it's a lot more efficient to catch clear and obvious bugs before they get deployed for QA to test, and avoid the whole cycle of logging a defect, fixing it, redeploying, retesting...
Our not-very-helpful conclusion was that yes, code reviewers should confirm that the change actually works as intended, but no, the developers didn't want to spend their time doing that. So it's still at an impasse.
An alternative which may work (better for a team all in one office than for a distributed team) is for the developer to quickly demo the change to the code reviewer. That way the reviewer doesn't have to derail his own work to switch branches, rebuild, etc.; he can just look at the functionality over the original developer's shoulder.
The demo alternative works really well in my experience. However, we do it after the code review, as we are already sure that the happy path succeeds. We also bring in one or two QA people for the demo so that they can highlight any obvious things that the devs might have missed.
The code reviewer shouldn't be accountable for test execution, but should ensure (it should be on the review checklist) that the required unit tests are executed and pass. These "required unit tests" could be provided by the module's QA team member or by a pair programmer. Automation is the key to faster code reviews and to avoiding this kind of issue. I think you should at least automate the syntax and naming-convention parts of the review so that the reviewer can focus on issues that are manual and costly to automate. A small investment in unit test automation will also reap big benefits in the long run and avoid this kind of embarrassing situation, as you increase unit test coverage by perhaps 5% each sprint.
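To make that concrete, here is a rough sketch (my own illustration, not from the comment above) of what a "required unit test" on a review checklist might look like, using Python and pytest. The apply_discount function and its rules are hypothetical stand-ins for whatever feature is actually under review.

```python
# test_discount.py -- a minimal "required unit test" that a review checklist might demand.
# The apply_discount function and its validation rules are hypothetical examples.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    if price < 0:
        raise ValueError("price must be non-negative")
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_happy_path():
    # The basic case the developer should have verified before requesting review.
    assert apply_discount(100.0, 20.0) == 80.0

def test_invalid_input_rejected():
    # The kind of edge case a reviewer (or QA) is more likely to think of.
    with pytest.raises(ValueError):
        apply_discount(100.0, 150.0)
```

A reviewer can then check in CI that tests like these exist and pass, instead of manually re-verifying the behaviour.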
Please read my other comment first! I think that code review can include testing if the reviewer thinks it helpful and / or necessary, but shouldn't have to. Review and testing tend to catch different kinds of bugs, partly because they tend to be driven by different thought processes, so forcing people to combine the two will run the risk that you'll do neither particularly well.
Reviews will let you see problems regardless of test data, but it's hard for a reviewer to spot things that have been left out, as opposed to things that are wrong; omissions can usually be seen more easily in a test. Reviews will let you spot code that works but is badly structured, reinvents the wheel (a future maintenance headache), or performs poorly in some circumstances. System / integration testing is good at chasing things from end to end, up and down the levels of abstraction in the code; sometimes it's hard to keep track of all this during a review.
No - you can't do everything, you must delegate.
I've had to do this before when working with... well, let's say 'less meticulous' devs, and it's simply not possible - to properly perform manual testing takes time, so you'll have to steal from other tasks, or work overtime.
The problem here is the whole dev/QA us/them thing; it's 2017, and it's not OK for quality to be "someone else's problem". You're all on the same team, you all need to pull in the same direction, and quality control is everyone's responsibility.
Flip it round: why should code that can't be proven to work properly be allowed to be shipped just because it's formatted correctly? The difference between code and text is what the code does when you run it... so yes, getting that right absolutely should be the bare minimum requirement for a review!
On the flipside then; if I hand over a changed piece of software to our test team and it doesn't function, am I not wasting both their time establishing that it's broken, AND my time (in both the hand over and the subsequent rework), when perhaps spending a short time validating the changes might prevent that?
I'm keen on running the software during code review as a way to see the changes for myself anyway, as opposed to a "this looks like that has changed, nice" situation where you only imagine how things work by reading the diff.
A recent example - when looking at changes to produce some new print output, I couldn't for the life of me produce anything when I hit the print button. Turned out that we (me during review and my boss, who'd written the code) had both overlooked a tweak it needed, as it "looked like it would work".
It was only upon actually trying to hit print that I saw it wasn't working as expected, and it was resolved before it was formally released to test.
It saved time in the long run, because we didn't end up handing over a feature that would have immediately failed during testing. But if I hadn't built the app and tried the feature locally, at the point of code review it would have been passed out to testing, incorrectly, because the code looked like it was OK.
I would argue that the basic functionality should be validated even before sending out to code review by the developer implementing the feature.
A code review should catch all the other obvious errors, for example uncaught exceptions or missing null checks (there's a sketch of this kind of issue after this comment), and make sure that coding standards are being met and the code is readable.
QA should be for testing the system as a whole and regression-testing the product.
The happy path should always work before sending it out for a code review or to QA.
In the case you mentioned, there is a considerable amount of time wasted even by sending the code out for review (before a self-test), because at that point you don't even know whether your changes work.
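Here's a rough sketch (my own illustration, not the commenter's) of the kind of obvious error a review should catch at a glance: a missing null check and an unhandled exception. The config file and the "timeout" key are hypothetical.

```python
# Hypothetical example of the "obvious errors" a code review should catch.
import json

# Before review: the file may not exist (unhandled exception) and the key may be missing (None).
def get_timeout_unsafe(path):
    config = json.load(open(path))           # FileNotFoundError is never handled
    return config.get("timeout").strip()     # crashes with AttributeError if "timeout" is absent

# After review feedback: handle the missing file and the missing key explicitly.
def get_timeout(path, default="30"):
    try:
        with open(path) as f:
            config = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return default
    value = config.get("timeout")
    return value.strip() if isinstance(value, str) else default
```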
Having other devs QA your work as part of code review is IMHO a waste of time.
Personally, I prefer to use code reviews as a tool for sharing knowledge and best practices.
Why not send the code review to all the developers on the project?
Junior devs might pick up new tricks while mid/senior devs might catch potential issues.
Having just 1 person aka the lead do code reviews is probably not very efficient.
Personally, I feel doing QA during code reviews is redundant. The developer should be smoke-testing the application to make sure that, at the very least, the happy path works.
Another alternative would be to have a short demo session after each feature, where the dev shows the feature to the rest of the team and they ask questions or try out various paths. It shouldn't take more than 10-15 minutes.
If you really want a dev to do testing, it might be better to write a system test if you can't unit test it.
Code review is just that; code review. It is another developer checking your code changes, pretty briefly but still checking it, to make sure you did not miss something obvious. So code review should not include manual testing.
BUT - before you even get to code review, you should have tested your code changes functionally. You cannot ever, except for the simplest of changes (like a wording change in a UI, for example), make a code change and not do your basic check, which is running the code to ensure that it basically works. It is not your job to do QA's work, meaning you are not responsible for full, detailed testing of the app, or at least of the affected parts of the app. But it is your job to do at least a surface-level test of your change in the running program.
I will be honest: it is pretty scary if you have in fact made a non-trivial code change and submitted that code as a fix to QA without ever having run it. It seems obvious that taking 5-10 minutes to see if your code at least appears to run as you intended is just the only way to go. What you describe is this scenario:
You: "I am done fixing bug X. Please review it."
Other Developer: "Did you test it?"
You: "No."
Seems ridiculous, doesn't it?
Sorry for sounding harsh, but it seems to not even be a question to me. It has nothing to do with code review, but it has everything to do with having at least some degree of certainty that the code changes you made did not break something, even before submitting the code for review.
Is your QA department also to blame? Of course; they clearly did not test an affected feature, and I assume in your ticket to them you noted possible affected areas of the program. But you are more to blame, not because you missed something in your testing - that would have been OK because testing is not your primary job - but because you did not even test at all.
Exactly right. Every developer should test their code before committing. Code review should then only consist of checking the code, not testing it again. If someone commits code without testing it, educate them to test first. The only exception is when you are starting a project and there is not enough functionality, or the program does not work yet; then there is no helping it but to commit without testing. However, once the program is runnable and somewhat functional, you should test it thoroughly before continuing further.
This is bad advice. Developers should commit freely and often. In fact, uncommitted code should never be tested.
It is OK if there is buggy committed code. We have version control for a reason.
I suspect you are confusing "commit" with "push into production".
Sorry mate :) I was thinking in the context of SVN, because I am forced to use it at work. But of course I meant push into a non-dev branch. It can be pushed into some private dev branch or into a WIP pull request, but I still stand by my opinion that a developer should test their code before merging.
I've never done code reviews, so I looked it up to see what others have said on the subject. This one looked good:
https://blog.jetbrains.com/...
This is one of the things that they look for:
Does the code actually do what it was supposed to do? If there are automated tests to ensure correctness of the code, do the tests really test the code meets the agreed requirements?
And the sites I looked at said you should automate the syntax checking rather than do it manually.
I think ideally we try to make it so we don't rely on QA. We do all the QA ourselves and QA is just there to make sure that we didn't mess up our own QA. Never blame QA, but take responsibility for the work yourself.
As you are using a Continuous Integration/Continuous Deployment process, I can guess that you are working in an agile environment.
One principle that I like to instill in my own team is ownership: the whole team is accountable for everything related to the project's success or failure. Because of that, the first thing we do differently, comparing my team to yours, is the responsibility for code review. It is shared with the whole team, so as the team lead I do code reviews, but so does everyone else on the team.
A second principle, which I try hard to achieve in my team, is to drive all team members toward a mindset of removing any waste.
So in your particular case, we would build a defense into each step of the process. The first level of defense is the developers themselves, who are responsible for implementing a feature according to its requirements. They are certainly also responsible for verifying that the feature works as described in the requirements written by the business analysts, and definitely for having automated tests written that verify it.
Then the second level of defense is the code review! There we want to catch anything that the developer might have missed, including in the requirements. So yes, the reviewers also test the feature. Why? Because our goal as a team is to ensure that QA doesn't find anything! So why do we need QA at the end? QA is our third and last line of defense, in case both developer and reviewer missed something. After that, we are impacting our customers.
Not doing it this way would mean waste in our process, because transferring a feature from developer to reviewer has a cost, transferring it to QA has an even higher cost, and a bug reaching the customer has a higher cost still.
If you introduce all these safeguards and they increase overall work by 50%, but waiting until QA catches the problem increases the work by 20%, then you just generated quite a lot of waste and inefficiency.
Even worse if everything works perfectly, because you've then introduced an infinite amount of additional waste, compared to the baseline of 0.
But more to the point: if you are able to write automated testing for the feature that positively confirms the functionality... why is QA wasting time doing manual testing?
If all those 'safeguards' increase your overall work by 50%, then you certainly have some other problems. That is definitely not the case in my team.
The purpose of code review isn't only about catching issues in code, bugs, regressions, or missing requirements; it is also a great tool for growing the responsibility, ownership, and motivation of the team, and for spreading know-how, which pays back quite quickly. It is true that the time you invest upfront can be higher at the beginning, when your team is in the forming, storming, and norming phases, but your job as a team leader is to bring the team to the performing phase as quickly as possible.
When you have QA catching things, this starts a feedback loop which can be quite time-consuming: creating tickets, discussions, code changes, going through QA again, and so on. This has a cost which is higher than some feedback in a pull request to one of your developer team members. The feedback loop is shorter, so there is less waste.
"why is QA wasting time doing manual testing?"
Great question! I struggled with that for quite some time and also thought we didn't really need QA in our setup. But I have learned that people doing QA really think differently from developers, and most of the time they find things the developer didn't think about. That is why, in our case, we re-introduced QA testers, with a little friendly competition: testers try to find things the developers missed, and developers try to deliver something in which the QA testers can't find any problem.
In the end, all of this depends deeply on your situation and environment; what makes sense in our case might not make sense in yours.
Re. waste, automation etc. Here's a very interesting case study with proper numbers where automation, testing etc. has reduced waste hugely. (Not saying that this will apply exactly in all cases, but it's a decent counter-example to the automation=waste argument.) The whole video's worth watching, but the particular bit is https://www.youtube.com/wat...
Thanks a lot Bob, I finally had the time to watch the video. I enjoyed it a lot and it definitely matches my experience, just with fewer developers ;-)
The flaw is in expecting the team lead to do every code review: a single person becomes the bottleneck for checking and validating code. As another poster has already mentioned, by creating an environment where code reviews are undertaken by the whole team, the different views of multiple people give a better chance of spotting mistakes.
Expecting that only the team lead should do reviews is just having a God complex, and leads are not gods.
I think that regardless of whether code reviews include testing or not, there needs to be agreement on the nature of the task, who does it, when and - in some ways most importantly - how much time it will take. I know that some coding tasks are hard to do but easy to review etc, but it might be possible to say that on average if some code took X hours to write it will take X * R hours to review, where R is a constant (e.g. 0.2).
Then, in order for reviews to actually happen, tasks need to be estimated as taking X * (1+R) hours out of the team's time budget rather than just X; for example, a task estimated at 10 hours with R = 0.2 should be budgeted at 12 hours. Otherwise people have to find time in the margins to do reviewing; if it's not considered important then maybe that's OK, but if it is considered important, that's not.
This is all just a baseline, i.e. before your question comes along. Then if review needs to include testing, R is going to increase, which will mean that the time per task will increase, which means that fewer tasks will get done in a given amount of time.
I know this is all blindingly obvious stuff, but sometimes organisations, i.e. from managers downwards, aren't honest and in agreement about the costs of things and about what's important and what's not.
If this is an attempt to get extra testing hours for free, then I would consider resisting it. If it is an attempt to increase the proportion of the time budget spent on testing then that's a different question. (One I'm deliberately not answering as I think that this one needs answering first.)
Before answering your main question, I would second Ankur's comment: why have only the lead do code reviews? This seems a bit old-fashioned and part of an authoritarian style of leadership. Increase your code reviewers to at least two, IMHO, and let the whole team learn from their colleagues' mistakes. Plus they might spot something you don't. Now for the actual question: I think not. Basic testing (does it run + has what I've changed worked) should be done by the dev or devs prior to putting the change into code review. If you're frequently getting to QA and this is not the case, then it's something to improve on in your team. A dev should always be able to say 'but it worked on my machine' ;).
I agree with others that some minimal testing should, at least, be done. Primarily testing that the "fix" indeed fixes the bug and that it does not break other possibly affected areas.
Also, I think that code reviews should be done by the whole team. Code reviewing is a very good opportunity to learn and also, having more eyes on the code improves quality.
My two cents: don't do code reviews, do peer reviews. Peer reviews should include everything: reviewing the code for cleanliness, adherence to standards, design, and testing to ensure that it functions as intended.
The sooner a bug is discovered, the easier and cheaper it is to fix. QA is there to catch things that fall through the cracks, not to be the first line of defense.
My two cents: for some teams, do both - peer reviews and code reviews.