A Better Way of Working
There are countless productivity solutions, but they largely fit within one of the following categories:
- Documentation, word processing and note keeping
- Project management
- Presentation
- Spreadsheets, databases and charting
- Graphics and multimedia
Roobrick, however, is different. Roobrick encourages an adjustment to the way individuals and teams approach certain kinds of tasks such as:
- problem solving
- decision making
- assessments
For these kinds of work, people typically attempt to solve the problem as quickly as possible. However, in many cases this is a misguided approach because the person or team will not be working with clear objectives. Let's look at an example to see why. The example is based on defining OKRs, but if OKRs aren't relevant to you, keep in mind that numerous other types of tasks could have been chosen, such as assessing a candidate for a job. Note that the example is centred around the process of defining new OKRs, not the assessment of key results (KRs).
Let's say an organisation uses OKRs for goal management and Joe is asked to define an OKR for his team. Writing good OKRs is hard, which is why there are countless books dedicated to the subject. But after spending time drafting his OKR, how would Joe know if his OKR is defined well? And how would Joe's teammates be confident the OKR is defined well?
Typically Joe and his teammates would review the OKR in light of their individual knowledge of OKR best practices. Since this collective knowledge depends on who is involved, the quality of OKRs within the organisation will vary. But sub-standard OKRs aren't tolerable, because goal setting is so important: it affects the team's alignment and what they work on for significant periods.
In the case of Joe's task to define an OKR, he needs criteria that can be used to determine how well his OKR has been defined. For example, the criteria may indicate the objective should be succinct and the key results should be aspirational. And whilst he may choose to assess his OKR on completion of his assignment, it is preferable for him to assess it during the drafting process so that any shortcomings can be addressed early. This approach is similar to agile development, where software developers always have a working version of their code that meets a certain quality standard.
Let's say Joe is aware he should assess his OKR against some OKR writing guidelines. Would Joe be able to find the guidelines he needs? The best case scenario is that Joe's organisation has a central repository of processes which includes guidelines for writing OKRs. Most likely, however, Joe's organisation has not defined any OKR writing guidelines. Somewhere in the middle exists the possibility that a department or individual in Joe's organisation has taken the time to write some notes on OKR definition best practices, but Joe is unaware of them. We land at the likely possibility that Joe needs to create guidelines to assess his OKR against. This would be very unfortunate because other staff in Joe's organisation would be faced with the same challenge.
Defining OKRs is a common task across organisations, so it wouldn't make sense for every organisation to define its own separate set of guidelines for how OKRs should be written.
If we assume Joe works in an organisation that has defined best practice guidelines for crafting OKRs, would it be likely that Joe would use the guidelines to evaluate the OKR he's writing? Would it be evident to Joe's colleagues that Joe has assessed his OKR against the guidelines? Would Joe be able to share his self assessment with his colleagues such that they can agree on the adequacy of the OKR he's defined?
The likely answer to all of these questions is no.
Summing up, to ensure Joe drafts his OKR to a high standard, he needs to:
- Be aware of the best practice guidelines for writing OKRs;
- Assess his OKR against the guidelines, preferably whilst he drafts it; and
- Be able to share his assessment with his colleagues in order to create an agreed understanding of the quality of the OKR.
Joe and his colleagues need these same features when they work on other types of tasks such as:
- Assessing job candidates
- Decision making
- Competitive analysis
- Managing employee development
- and any other type of task involving some kind of assessment, grading or auditing
Joe and his colleagues need a better way of working:
- They need to be aware of the quality of their work.
- They need to be able to assess their own work against agreed criteria and guidelines.
- They need to be able to collaborate on assessments of shared work.
- They need a solution that encourages this way of working.
Software developers have been working this way for quite some time. When a developer makes changes, they are usually following some form of programming guidelines. They often have tools such as linters and automated tests to assess their work as they develop it. Using tools such as Bitbucket, they create pull requests to get their changes reviewed by their teammates, so that they form a shared understanding of the quality of their work.
Unfortunately, however, most other types of knowledge workers don't work this way. As a consequence, poor decisions are made, employee development is weak, inadequate competitive analysis is performed and quality suffers in innumerable other ways.
This is the problem space addressed by Roobrick.
Let's go back and have Joe define his OKR with the aid of Roobrick. Joe would take the following steps:
- Joe would start by searching for the OKR writing guidelines in Roobrick.
- Within Roobrick, Joe would create a new assessment for the OKR he is tasked to write.
- Joe would start writing his OKR. Whilst writing the OKR, Joe would adjust his self assessment to capture the strengths and weaknesses according to the criteria.
- As Joe starts finalising his OKR, he increasingly refers to his self assessment, especially those areas he has marked as being weaker. He makes improvements to his OKR and adjusts his self assessment as necessary.
- Joe ends up with an OKR and a self assessment of it against the OKR writing guidelines.
- Joe shares his OKR and his Roobrick assessment of the OKR with his colleagues.
- Joe's colleagues optionally make adjustments to Joe's assessment which may trigger Joe to refine the OKR.
- Joe and his colleagues may choose to change the status of the assessment to done when they are satisfied with the quality of the OKR.
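The steps above boil down to a simple model: agreed criteria plus a per-criterion self assessment with a status. Here is a minimal sketch of that idea in Python. All names here are hypothetical, chosen for illustration only; they are not Roobrick's actual data model or API.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the rubric/assessment workflow described
# above. None of these names come from Roobrick itself.

@dataclass
class Criterion:
    name: str
    guideline: str

@dataclass
class Assessment:
    subject: str                                  # e.g. the OKR being drafted
    ratings: dict = field(default_factory=dict)   # criterion name -> score 1..5
    status: str = "in progress"

    def rate(self, criterion: Criterion, score: int) -> None:
        self.ratings[criterion.name] = score

    def weak_areas(self, threshold: int = 3) -> list:
        # The areas Joe revisits while finalising his OKR.
        return [name for name, score in self.ratings.items() if score < threshold]

succinct = Criterion("Succinct objective", "The objective should be succinct.")
aspirational = Criterion("Aspirational KRs", "Key results should be aspirational.")

assessment = Assessment(subject="Team OKR draft")
assessment.rate(succinct, 4)
assessment.rate(aspirational, 2)
print(assessment.weak_areas())  # -> ['Aspirational KRs']
```

As Joe improves the weak areas and re-rates them, `weak_areas()` empties out, at which point he and his colleagues might set `status` to done.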
Whilst this process may seem complicated, it really just involves Joe being aware of the OKR writing guidelines and assessing his OKR against them. The methodology is supported by integrations between Roobrick and other solutions that streamline this way of working.
For example, the Roobrick integration with Jira allows the quality of issues to be assessed. Defects, for instance, are much easier to fix if the issue clearly details the steps required to reproduce them and the difference between the expected and actual results of the relevant actions. The integration provides a Roobrick panel within the Jira issue view where this assessment can be done, and the user can choose to write the assessment as a comment against the Jira issue. This means the user never has to leave Jira.
Similarly, the Roobrick integration with monday.com provides a view allowing your monday.com board items to be assessed against any relevant criteria. For example, you may have a board with items corresponding to job candidates for a particular role. You no doubt have requirements for the role which can easily be expressed in Roobrick. Within monday.com, you can switch to the Roobrick view, select the candidates to review and assess them against the criteria. Roobrick updates the main monday.com board with the assessment summaries. So, if you're a monday.com user, you can still centralise your work and workflow around monday.com, but incorporate the Roobrick methodology.
Seamless integrations such as these mean the Roobrick methodology will become a natural part of your regular workflows. It would be inconceivable for software engineers working on corporate systems to push their code to production without adhering to agreed standards and having the quality of their code reviewed, so why is it tolerable for so many other important kinds of work to vary in quality? We have accepted the current way of working because we haven't had the tools to help us adopt a better one, but Roobrick and its integrations with other solutions solve this.