Highlights from the BIOMEDevice Boston 2016 conference

I attended the BIOMEDevice conference on the 13th and 14th of April 2016. The conference was packed with suppliers from the medical device space, especially from Massachusetts. Two talks particularly resonated with me, and I thought I would drop my notes here so that everybody can get a feel for what was said:

  • Patient Privacy & Data Security in the Cloud Communication Age
  • Winning Over the Hospital Value Analysis Committees

BIOMEDevice 2016 conference hall

Patient Privacy & Data Security in the Cloud Communication Age

  • Our technology is advancing faster than we can protect it. How can we keep up with the cloud communication age and build sustainable data protection?
  • Understanding FDA’s evolving guidelines and standards to address cyber security
  • How is HIPAA playing an increasingly pervasive role in health data management?
  • Cloud-enabled utilities and solutions – what are the pros, cons, and security risks of storing data in the cloud?
  • Advances in safely transmitting data across various healthcare applications and protecting data from cyber attacks

Michael McNeil, Global Product & Security Services Officer, PHILIPS HEALTHCARE


Philips has a HealthSuite IoT architecture based on AWS (EC2, S3, Glacier, Lambda, SNS).

http://www.usa.philips.com/healthcare/innovation/about-health-suite

They have a mechanism to ensure that data does not leave a country’s borders where cross-border transfer is forbidden.
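
One simple building block of such a scheme (a hedged sketch, not Philips’ actual implementation; names are illustrative) is to pin the storage client to an in-country AWS region, so objects are written and kept there. Real data-residency control also needs bucket policies and replication rules, which are out of scope here:

```csharp
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

static class RegionPinnedStorage
{
    // Pin the client to one region (Frankfurt) so data is stored inside the EU.
    static readonly IAmazonS3 S3 = new AmazonS3Client(RegionEndpoint.EUCentral1);

    public static Task StoreAsync(string bucket, string key, string payload)
    {
        // The bucket itself must have been created in eu-central-1.
        return S3.PutObjectAsync(new PutObjectRequest
        {
            BucketName = bucket,
            Key = key,
            ContentBody = payload,
        });
    }
}
```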

Industry challenges:

  • Patient safety (ethical hackers have demonstrated threats)
  • Data integrity and availability – required for care delivery
  • Legal and regulatory obligations
  • Protecting intellectual property – especially when expanding into emerging markets


Best practices:

  • Design security at every stage of development
  • Take advantage of well-known techniques (encryption, salting, rate limiting) – see the sketch after this list
  • Train employees
  • Integrate security by design. Security built into the development process.
  • External security testing and assessment.
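
Since the “well-known techniques” bullet is easy to gloss over, here is a minimal sketch of one of them – salted password hashing – using the PBKDF2 implementation built into .NET (Rfc2898DeriveBytes). The parameters are illustrative assumptions, not a vetted recommendation:

```csharp
using System.Security.Cryptography;

static class PasswordHasher
{
    // Illustrative parameters – tune the iteration count to your hardware and threat model.
    const int SaltSize = 16;       // 128-bit salt, random and unique per password
    const int HashSize = 32;       // 256-bit derived key
    const int Iterations = 100000;

    // Returns the hash and outputs the freshly generated salt; store both.
    public static byte[] Hash(string password, out byte[] salt)
    {
        salt = new byte[SaltSize];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, Iterations))
            return pbkdf2.GetBytes(HashSize);
    }

    public static bool Verify(string password, byte[] salt, byte[] expectedHash)
    {
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, Iterations))
        {
            var actual = pbkdf2.GetBytes(expectedHash.Length);
            // Constant-time comparison, so timing reveals nothing about the bytes.
            var diff = 0;
            for (var i = 0; i < actual.Length; i++)
                diff |= actual[i] ^ expectedHash[i];
            return diff == 0;
        }
    }
}
```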


Medical device challenges:

  • Portable and mobile devices (storage medium encryption, hard to remove without tools)
  • Access to device and settings
  • Firewall controls
  • Malware controls (whitelist solutions take away the need for daily updates)


Avoid the 3 deadly sins of medical device vulnerabilities

  • Uncontrolled distribution of passwords (fixed, default, hard-coded)
  • Failure to provide timely security software updates and patch management
  • Security vulnerability in off-the-shelf software designed to prevent unauthorized device or network access


The FDA has clearly stated that you don’t have to go through the entire re-submission process to address security updates (validation responsibility still applies, though).


Establish a policy for providers and SOUPs (embed checkpoints in vendor selection, update the procurement process, establish monitoring criteria [frequency of scans, pen testing…]).


Define a responsible-disclosure process for incidents (they will happen!).


Conclusion:

  • Continuous threat monitoring of the healthcare landscape is critical
  • Transparency, accountability and responsiveness must be ongoing features
  • Wider dialogue between medical device makers, hospitals, regulators and security professionals will advance innovation in security in the healthcare industry


Winning Over the Hospital Value Analysis Committees

  • Overview of the changing marketplace and how to position your product in this tight economic environment
  • USA vs. Europe – what are the hospitals looking for?
  • Important questions you should be able to answer
  • Looking at devices and assessing value – from a physician standpoint
  • Discussing value added services in products
  • Understanding the necessity of usability and how it can determine widespread adoption

Moderator:
David J. Dykeman, Attorney, GREENBERG TRAURIG, LLP

Panelists:
Eric T. Pierce, MD, PhD, Physician Director of Anesthesia Bioengineering, Supply & Technical Support, Department of Anesthesia, Critical Care & Pain Medicine, MASSACHUSETTS GENERAL HOSPITAL
Michael Fraai, Executive Director- Biomedical Engineering & Device Integration, BRIGHAM AND WOMEN’S HOSPITAL
David J. Berkowitz, Vice President, Healthcare Insights and Analytics, ECRI INSTITUTE


Value Analysis Committees are now the gatekeepers for getting a technology into hospitals. Decisions are increasingly based on financial factors; clinical benefit is no longer the paramount criterion.

Their considerations:

  • What do they do with the former product if there is a replacement?
  • Cost – upfront and maintenance. TCO is king.
  • Clinical outcome – if backed by solid evidence.


Eric T. Pierce, MD, PhD: how we select devices

Eric is involved in product selection for Massachusetts General Hospital – especially for anesthesia.

The selection process is always changing.

Value in Medical devices = Quality (outcome, safety, clinician satisfaction) / TCO

Traditionally, physicians were the big drivers of device selection. They are becoming less and less influential.

When a product might be controversial, limited trials are set up.

For complex and expensive products, the process is the following:

  • An ad-hoc evaluation group is formed (physician director, bioengineers, clinician advocates, division leaders, frequent users)
  • Review all viable product options
  • Apply selection criteria (TCO, compatibility & continuity, ease of operation, serviceability, product support)
  • Narrow the choice down to 2 or 3 products
  • Focused in-service trial of the top choices
  • Comparative financial analysis, purchasing folks negotiate
  • Review, recommendation, decision


The whole process takes weeks or months.

Ease of operation criteria (very important):

  • Intuitive design
  • Simple interface
  • Cleanability (they recently had a device whose screen was damaged by cleaning solutions)
  • Battery life
  • Boot-up time (because of emergencies) – they actually measure it
  • Portability (a big issue for them: portable devices get stolen)
  • Mounts


Winning over the value analysis committee – David J. Berkowitz, Vice President, Healthcare Insights and Analytics, ECRI INSTITUTE


We are moving from a volume-based healthcare system to a value-based healthcare system

The absence of evidence of clinical benefit is a showstopper.


Michael Fraai, Executive Director- Biomedical Engineering & Device Integration, BRIGHAM AND WOMEN’S HOSPITAL

Network security is a huge topic before devices are authorized onto a hospital’s network.

They don’t buy a quote. They buy a solution to deliver safe & efficient care.

There is an awareness of real cost.

Factors in the TCO: purchase cost, backfill cost, training cost, device integration, software cost, warranty cost, implementation cost, parts, accessories.

It is becoming more and more costly to integrate products into EHRs.


Panel discussion


Mistakes companies and salespeople make:

  • Adding too many features
  • Eliminating features that users do like
  • Not doing enough outcome research
  • Not understanding the user’s work environment (screens too small or difficult to read). Send your designers to the environment where the device will be used.
  • Introducing too many variables or deals
  • Not supporting interoperability (ICE standards)
  • Not respecting the clients’ time and objectives
  • Not being environmentally responsible


There is an EPP (Environmentally Preferable Purchasing) movement happening in the supply chain space.


Advice for manufacturers:

  • How do you reduce downtime?
  • Think about helping institutions to compute the TCO
  • Analyze error logs and fix errors. Provide backup capabilities.
  • Have a real value dossier, with all the material discussed above, ready for the value analysis committee.

The two-phase iteration

Developing software for medical devices implies two special aspects:

  • Documentation becomes important. Regulations require a ton of documentation, and failure to produce it might hamper the market access of your medical device. Conclusion: documentation becomes part of the definition of done, and you have to take some distance from the second principle of the agile manifesto (favoring working software over comprehensive documentation). The problem is, developers in general hate documentation (but have all kinds of justifications to make that look like a conscious choice, such as “documentation is always outdated” or “just read the code”). But when you have to do something you don’t like and there’s no way around it, I believe you should get at it and do it on a regular basis. Don’t postpone documentation until the end of the project, when people leave or are assigned to new endeavors, and three quarters of the knowledge has evaporated.
  • Leaving bugs is not an option. And an excellent technique for finding bugs is manual testing. Even on my projects that push automated testing the furthest, we still find 7 bugs a day through manual testing. And some tests are just too complicated to automate (examples: recovery after a power outage, intrusion testing). Manual testing has a very direct consequence: at some point, you have to deliver a version, testers will test it, that takes time, and when they put their hands on the next version, they don’t want it crippled by regressions due to new developments. Manual testers don’t have the patience of a continuous integration system.


A good way to adjust to these constraints is to use two-phase iterations. Phase 1: build. Phase 2: stabilize.

Build & stabilize

To elaborate a bit more:

  • Phase 1: construction.
    • Build a product increment. In this phase, you take risks. You write that ambitious new feature. You refactor that awfully complex engine. You change the build system.
    • You maintain quality (automated tests, bug fixing), but it’s not your main concern. You might let bugs and broken tests pile up a little (but not too much).
    • Little manual testing occurs (only functional challenges by those who wrote the specs, weekly general regression testing), but testers prepare for the next phase by honing test procedures and test strategies (according to impact analysis).
    • The final days of the construction phase will be a little constrained. There has to be a feeling of deadline around this date. Everybody works hard to be on time.
    • The most concrete consequence of the end of the construction phase is that a stabilization branch for this iteration is created.
  • Phase 2: stabilization.
    • Finish the product increment.
      • Testers test.
      • Developers fix bugs and broken tests. They are not allowed to take risks on the stabilization branch.
      • Several versions are issued and tested until the last one meets quality standards (I like to set a low total known bugs threshold – more on this in a dedicated post).
    • Write the documentation. It’s a good time: after the rush, and while things are still fresh in everyone’s head. Write the mandatory one-shot documentation for the iteration (test report, formal reviews…). Update long-term documentation (e.g. architecture documents).
    • Prepare for the next iteration. Select the candidate user stories. Have functional people explain them to developers. Extract the requirements related to the user stories, and make sure the evil details are taken into account (remember: at that point, specs should be ready. The months of talking, studying and analyzing features are over. If the spec is not ready, then the feature is not mature enough for this iteration. If you can’t write it down, you can’t code it). Developers and architects should throw their first design ideas on whiteboards and start negotiating solutions. Then formal planning occurs (detailed planning poker with tasks). Product Owners compare task estimates with personnel availability and estimated throughput, then choose which user stories will be part of the iteration, and which will not.


Two-phase iteration
Detailed activities inside the two-phase iteration

Practical considerations

  • Iteration size. Although Scrum advocates 2- to 4-week iterations, for this kind of process I’ve experimented with values between 4 and 8 weeks and settled on 6. This seems to me a good compromise: big enough that content justifies the iteration overhead (documentation, planning, manual test campaign), small enough to be manageable. Of course, this value works in my environment; you should try several and see what works for you.
  • Phase size. My biggest project started with a 2/3–1/3 proportion (that is, four weeks of construction and two weeks of stabilization). Then, as maintenance cost increased, I extended stabilization to 12 days (leaving 18 for construction) and plan to extend it again soon. It may seem long, but in my usual environment it takes about 4 deliveries to get a software version of good enough quality. This is a key feature of the two-phase iteration: by adjusting the relative sizes of construction and stabilization, you get a built-in mechanism for regulating speed and quality. More on this in a dedicated article.
  • Be tough with the construction end date. If a user story is not ready, then it ships in the next iteration. (Of course, if everybody at system level is waiting for it, you may want to be – or be forced to be – a little flexible. But if that user story was so important, why did it slip until the end of the construction phase? Couldn’t it have been planned earlier? You should always have a buffer of user stories or technical tasks of lesser importance, ready to be sacrificed if something important gets out of hand.)
  • Be tough with quality at the end of stabilization. If the product is buggy, it’s not shippable. The immediate course of action is to fix it, then ship it. The following construction phase will be smaller than usual, with fewer features. That’s a smaller problem than buggy software – your users are more important than your bosses.
  • On the day the stabilization phase begins, a stab branch should be created in the repository (named, for example, iteration_XX_stab). Why?
    • Dev on trunk/master: sometimes it is reasonable to allow a developer to start construction N+1 during stabilization N. Example: a huge refactoring with lots of impact, that should rather be performed when the change level on the code is lower (during stab). That developer should work on the trunk/master while everybody remains on the stab branch.
    • Psychological reasons: having to switch from trunk/master to the stab branch helps developers internalize that activities will now be different. And being able to perform other tasks in the trunk helps keep the stab branch clean (bug fixing only!). For example: fix that bug in the stab branch the quick-and-dirty way to avoid unnecessary regressions, but merge the fix into the trunk at once, and then refactor it there until the design doesn’t make you blush anymore.
    • Version maintenance. Imagine you have to fix a bug in the software version of iteration N, 3 months or 3 years from now: pick up stab branch N just where you left it.

Agile medical device system design

The Agile revolution has definitely transformed the way software is built, to such an extent that it has become mainstream – because it just works better. There are several factors behind this success: empowerment, which helps get the best out of people; automation, which reduces costs, cycle time and errors. But to me, the most powerful practice in the agile toolbox is incremental product design, which reduces risks at all levels:

  • Integration risk: you integrate sooner (all the time, in fact), so the long-dreaded integration phase of the eighties (which could last for years and often ended in project failure) is now an everyday, routine task.
  • User needs risk: by implementing the most important features first and putting them into the hands of end users ASAP, you gain field feedback on what users really need and want. You decrease the risk of creating totally useless or partially usable features (80% of features in software are said to be never or seldom used).
  • Project risks: by finishing the product often and measuring the team velocity, you know your real project pace and can adjust to it. Your team’s average velocity over the last three iterations is a good predictor of its pace until the end of the project. I’m a big fan of this down-to-earth wisdom of measuring what’s too complex to be predicted and changing course accordingly.
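
In code, that forecasting wisdom boils down to very little (the numbers are invented for illustration):

```csharp
using System;
using System.Linq;

static class ReleaseForecast
{
    static void Main()
    {
        var velocities = new[] { 23, 27, 25 }; // story points, last three iterations
        var remainingBacklog = 180;            // story points left in the release

        var averageVelocity = velocities.Average();
        var iterationsLeft = Math.Ceiling(remainingBacklog / averageVelocity);

        Console.WriteLine($"Average velocity: {averageVelocity:F1} points/iteration");
        Console.WriteLine($"Forecast: about {iterationsLeft} more iterations");
    }
}
```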

When working on medical device projects with my colleagues from the hardware, electronics, reagent or system teams, I’ve often wondered why they wouldn’t use iterative development to their advantage. The counter arguments they gave me usually were the following:

  • Our iterations are too long. When designing hardware, the time needed to finish plans, order parts from all over the world and receive them, test them (and send them back for defects once in a while), and assemble them is ridiculously long – up to six months. The same goes for electronics if suppliers are expected to design and produce boards. Reagent teams may perform stability tests that last for years.
  • Our iterations cost too much. Big hardware prototypes can cost the price of several brand-new cars. Moulds are awfully expensive. Reagent production lines are a luxury item. Physical stuff cannot just be made and destroyed without a sizeable monetary footprint.

These hardships entice specialists to optimize their own business with a typical waterfall process: long requirements elicitation, one-shot production of what they think should be made, oops we forgot something, some supplier is late, the schedule is doomed. Local optimization is the enemy of the global optimization endeavor that a systems project is. I believe systems design must be iterative and thought of as such from the very start.


System iterations
Hardware V1 and electronics V2 are combined with embedded software V5 to build embedded system V3. After some integration testing, embedded system V3 is combined with non-embedded software V6 and reagents V1 to perform the first round of tests of the complete system. This will lead to new insights and subsequent changes in the next iterations of all sub-components – a long time before the end of the project.


Software item iterations will likely always be shorter. But that doesn’t mean that other specialties can’t plan iterations too. Some techniques that can make it possible:

  • First hardware and electronics iterations can be made with prototyping material (for example, B&R automation products) that has an unrealistic production cost or size but allows the fast creation of first versions. If the first tests prove that the design is good, the next iterations can focus on production cost, maintainability, assembly lines and multi-sourcing of providers, while the overall system keeps on its journey.
  • Hardware stubs. First iterations can also use the technique we software developers know as stubs. For example, the first version of an automated, temperature-regulated drawer for reagent storage could be made without temperature regulation at all, and without automation (only fixed-position reagents, hard-coded or loaded into the database via a script) – see the sketch after this list.
  • Design and usability are a big concern for marketing departments and regulators alike. I would suggest meeting your end users ASAP by quickly manufacturing prototypes of all external interfaces – 3D prints, cardboard models, or foam models. Have end-user representatives execute typical usage scenarios with them. What do they think? I remember using this technique for a device with a bar-code reader: we printed a 3D version of the casing in a matter of days, only to realize that the bar-code reader was positioned in such a way that end users would almost have to break their wrists to use it. So we moved it to the opposite side very easily (no need to redesign all the internal parts of the device, no constraints!).
  • Reagent design is complex and slow. Help these guys by giving them a system prototype ASAP to test their stuff. They don’t care about chassis production cost, cybersecurity or the triple-sourcing of electronic components. They just need good biological performance.
  • Assembling subsystems is difficult. Something that has never been tested never works. So be sure to plan an integration and system debugging session every time you produce a system iteration, before downstream activities (such as biological performance tuning) can start.
  • As explained by the eXtreme Manufacturing movement, to plan for iterative, incremental system design, the priority would be to think carefully about the internal interfaces of the system and divide it into subsystems. Subsystems can evolve independently as long as they respect the interfaces – thus achieving fast-paced design.
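
To make the hardware-stub idea concrete, here is a minimal sketch (all interface and class names are hypothetical): the rest of the system codes against a drawer abstraction, and early system iterations plug in a stub with fixed reagent positions and no temperature regulation.

```csharp
using System.Collections.Generic;

// The abstraction the rest of the system codes against.
public interface IReagentDrawer
{
    IReadOnlyDictionary<string, int> ReagentPositions { get; } // reagent id -> slot
    double GetTemperature();
    void MoveToSlot(int slot);
}

// Early-iteration stub: fixed positions, no regulation, no motors.
public sealed class StubReagentDrawer : IReagentDrawer
{
    private readonly Dictionary<string, int> _positions = new Dictionary<string, int>
    {
        ["GLUCOSE"] = 1, // hard-coded, or loaded into the database via a script
        ["UREA"] = 2,
    };

    public IReadOnlyDictionary<string, int> ReagentPositions
    {
        get { return _positions; }
    }

    public double GetTemperature()
    {
        return 5.0; // pretend the drawer always sits at 5 °C
    }

    public void MoveToSlot(int slot)
    {
        // No automation yet: the operator places reagents by hand.
    }
}
```

Later iterations replace the stub with the real, motorized, regulated drawer without touching the callers.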


ScrumInc eXtreme Manufacturing car
The modules that make up an extreme manufacturing build party at ScrumInc

This is no easy task. But a necessary one to tackle the top risks of a medical device project: biological risk and registration risk.

  • You should produce a functional system as fast as you can to tackle the biological risk – living matter is so unpredictable that you are better off observing how it behaves (just like project dynamics, by the way).
  • And once you have a complete system able to perform its biological task (stripped of the bells and whistles), i.e. once you have tackled the biological risk, consider handling the registration risk by registering this minimalistic system. This will take a lot of time (typically 2 years in China). The registration teams should be able to define the contours of a system that can be registered officially all over the world, but that you probably won’t sell (it’s ugly, it can’t be maintained, it has no advanced software features, but yes, it performs its core biological mission pretty well). Meanwhile, you will prepare a second version with all the nice-to-have features, registered as a simple product evolution, with lower risk and shorter delay – and it might well end up on the market shortly after the first version is ready.

Lean Startup thinking promotes trying your concept with a Minimum Viable Product that you put into the hands of your end users. Product registration authorities are a kind of VIP end user. Maybe you should plan your entire project to build them a dedicated MVP that addresses the registration risk right after the biological risk is under control.

Guidelines for building a culture that promotes good architecture

Good architecture is essential in medical software, where it helps, in particular, to achieve safety. But architecture, whatever its excellence at the origin, will degrade over time, as inevitably as entropy increases. It’s a law of the universe: disorder naturally increases, and software projects are prone to disorder (people come and go, requirements change, interacting systems upgrade, products are launched and abandoned, technologies thrive and wane). So we need top-notch architecture, but it constantly gets corrupted. What we need is a force that constantly fixes it and makes it better suited to current conditions. The following is a list of practices that managers can use to build a culture that promotes an evolution towards a better architecture.

Encourage refactoring

Refactoring is the key practice that will keep the architecture afloat. It is the recurrent part of the force we need: something that comes back over and over again to fix what appears not so good now. But refactoring doesn’t happen by magic. What I believe can help:

  • Allocate time for refactoring in every iteration.
    • It creates a culture where developers know that management cares about architecture. Simple as that. If they pay for it, they care about it. A million times more effective than talk about quality.
    • Technical debt management: as with financial debt, it comes with interest; you’d better pay back your loans on a regular basis or total interest will get sky-high – in the worst case leading to project bankruptcy, where you have to start from scratch again because your code base is no longer profitable given the likely project roadmap.
    • Risk management. Refactorings introduce bugs in areas of code that were stable before. So they add risk to your project. As always, you’re better off spreading that risk to avoid big surprises. You don’t want to refactor much right before a major release.
    • Do it now, otherwise you might end up not doing it at all. Don’t wait. The more you wait, the more deadlines and emergencies will convince you to postpone it again. Refactoring is a long-term endeavor with no immediate benefit. It’s in the “important, non-urgent” zone – the difference between good and bad in the long run. And you should do some “important, non-urgent” activities every iteration.
    • Refactoring cost acceptance. If you are always refactoring, senior management will get used to a business-as-usual project pace that includes refactoring. They will accept it. But ask for 3 months of refactoring alone, with no features, and management (especially if it has no technical background) will likely say NO. Your regular project pace must come with quality included, period – remember, we are talking about medical devices.
  • Provide a good safety net with automated testing. Developers should be able to run a comprehensive test suite on their refactoring branch and make sure they didn’t break anything before merging to the trunk. You don’t want them to disrupt the work of others or introduce bugs in the product. I’ve seen projects without tests, where it’s very difficult to predict impact; you know what happens? Developers don’t refactor, or barely. In this sense, automated testing is once again “more an act of design than of verification” (Bob Martin): in addition to favoring loosely-coupled design, automated testing allows design to evolve over time by enabling refactoring (a sketch follows this list).
  • Don’t get too mad about regressions. You can’t make an omelet without breaking eggs. If a bug made it through your testing process, make the testing process better – but don’t yell at developers. They should feel safe taking a reasonable amount of risk. If they don’t, refactoring stops.
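
To illustrate the safety-net idea, here is a minimal sketch of a characterization test (the calculator and its numbers are invented; I use NUnit, but any test framework works): it pins the current behavior so a refactoring branch can be checked against it before merging.

```csharp
using System;
using NUnit.Framework;

// The (hypothetical) engine about to be refactored.
public static class DilutionCalculator
{
    public static double ComputeFactor(double initialVolume, double sampleVolume)
    {
        if (sampleVolume <= 0)
            throw new ArgumentOutOfRangeException("sampleVolume");
        return initialVolume / sampleVolume;
    }
}

// Characterization tests: they pin today’s behavior so that tomorrow’s
// refactoring can be verified against it before merging to the trunk.
[TestFixture]
public class DilutionCalculatorTests
{
    [TestCase(100.0, 10.0, ExpectedResult = 10.0)]
    [TestCase(100.0, 4.0, ExpectedResult = 25.0)]
    public double ComputeFactor_ReturnsKnownResults(double initialVolume, double sampleVolume)
    {
        return DilutionCalculator.ComputeFactor(initialVolume, sampleVolume);
    }

    [Test]
    public void ComputeFactor_RejectsZeroSampleVolume()
    {
        Assert.Throws<ArgumentOutOfRangeException>(
            () => DilutionCalculator.ComputeFactor(100.0, 0.0));
    }
}
```
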
Taichung City Cultural Centre by SANE architecture
Taichung City Cultural Centre, Taichung, Taiwan. ARCHITECT: SANE architecture.

On architects

The agile manifesto states that “the best architectures […] and designs emerge from self-organizing teams.” But when team size exceeds the canonical 7±2 (typically when several scrum teams interact to create a bigger product or range of products), I find it useful to entitle architects to perform some key activities:

  • Settle disputes when consensus cannot be reached. Humanity invented hierarchies to have power struggles settled once and for all – not re-fought at every design meeting.
  • Stimulate and validate good design before development. Having architects reject design after implementation is a tremendous waste. There should be a discussion over feature design before coding. I don’t mean formal reviews with design document approvals: a coffee break and a diagram on a napkin should be sufficient when trust is established.
  • Perform code reviews. They help the architects know a little bit about everything. They allow mistakes to be spotted earlier. They allow the architects to check that the design actually implemented is what was agreed upon with the developer. If better ideas emerge during review, refactor while the code is still fresh in the developer’s head. Code reviews are an excellent opportunity for mentoring and training: concepts applied to practical cases. It’s good for developers to know that what they commit will be challenged, and that crap cannot make it to the trunk – they will pay more attention. Code review is definitely an activity with an excellent return on investment: many deep things happen in little time.
  • Maintain the one-thousand-feet view to add broader context to design decisions. This is crucial to the architect’s legitimacy (in addition to recognized technical and social skills): somebody worth talking to, to make sure a local design (which may be excellent) fits well into the bigger picture. When the codebase gets big, the one-thousand-feet view naturally gets lost. As with code reviews, maintaining this view means, very concretely, that the architect has budgeted time to take care of other people’s code.
  • Promote code reuse. Developers tend to reuse less than they could. And they can’t reuse something they don’t know about. Once again, the guy who knows a little bit about everything might prove useful.

On the human side of architecture

On the human side of architecture, I recommend the following considerations:

  • Hire near-architect developers. Make sure, during the recruitment process, that they have good design ideas, that they constantly learn, that they are open enough to understand what other designers think, and that they are able to communicate their point of view in an understandable way. People with poor design skills and little ability to progress will destroy the architecture, which must, to survive, be understood and refactored by every developer on the team. So make sure new recruits will find their way into your project’s patterns and practices. Juniors are a good asset if they have the potential to get up to speed quickly.
  • Architects will show developers that there is a career path for technical people. This might help fight turnover.
  • Good practices for spreading knowledge:
    • Iteration design retrospective: developers explain the design that was actually implemented to their peers, so that everybody has at least a basic knowledge of the recent changes.
    • TechDays: at a wider scale (scrum of scrums), teams present to the others a summary of the global architecture of the component or software they are responsible for. This is also a good moment to share new technologies that teams might use (for example, yesterday, one of us presented the new features of C# 6, which we recently migrated to: which ones should be used in our context, and which ones we should be wary of).
  • Hire nice architects. It’s quite common to see architects with a bad attitude. Maybe they feel technically insecure and need to show off and snap at other people to protect their realm, slowly falling into the ivory-tower syndrome. But, to my mind, being an architect doesn’t mean you have to be the best developer in the house: you must be one of the good ones AND have social skills: leadership to convince people, openness to incorporate their good ideas into the architecture, enough altruism to take an interest in their work and give them a hand when they need it. If architects are not nice, people will stop asking questions, communication will dry up, and the bad-attitude architects will end up coding some kind of framework in isolation from everybody else, criticizing developers only during nightmarish code reviews.
architecture review
Architecture review – why should things be different in software?

Other practices

  • Technical debt backlog. Have the courage to recognize when things are bad or not so perfect, and change them. A backlog is good for memorizing refactoring ideas and prioritizing them. As with the product backlog, you will never implement everything. In fact, that would be bad: as with any human activity, some ideas are frankly inappropriate, and should be dumped. So let refactoring ideas mature for some time. The size of the backlog (ideally with estimates) will give you an excellent idea of the size of your technical debt. It should be monitored.
  • Whiteboards everywhere. Developers should start coding only once they are able to express their design intentions clearly on the whiteboard to an architect and their peers and reach a consensus. Whiteboards are an essential communication and design-inception tool. If you can’t draw it, it’s not clear enough.
  • Get external architecture reviews. It will give fresh ideas. Human beings can get used to anything. After a while inhaling a code stench, you don’t smell it anymore.
    • Ask new developers what they most dislike in the design, and listen to them. It means something. Especially if several of them agree on some issue.
    • Hire architecture consultants once in a while to tell you what they think. This will give you extra legitimacy to convince your management to finance important refactorings. I’ve had a good experience with such an audit: after an initial denial-and-rejection phase (it hurts to hear your baby is not perfect!), the team implemented about half of the recommendations, and they proved good in the long run. Some of them were already known to the team, but having somebody else point at them was the spark we needed to trigger action.
  • Use architecture verification tools. My in-house development teams successfully use NDepend and its Code Query Language to write architectural rules (for example: the GUI layer cannot access the DAL layer, methods and classes cannot exceed a certain size, namespace dependency cycles are forbidden, sub-domain A cannot access sub-domain B…). Once in the build, NDepend will shout when a rule is infringed, so the rules will be strictly abided by (corollary: they must be good, pragmatic rules, or they will cost a lot to enforce; be ready to drop them quickly if costs outweigh benefits). NDepend (like other code inspection tools) is so relentless that developers soon learn they cannot get away with violations; they internalize the rules in such a way that, in the end, the code they produce no longer violates them – and they almost cease to be annoyed. So basic rules are automatically enforced. This is excellent for the architects and their relations with developers: code inspection tools play the bad cop role, architects play the good cop role. Architects help developers resolve rule infringements, leading to better team spirit. And the architects’ bandwidth during code review is best used when the repetitive stuff has already been taken care of.
NDepend
NDepend usage example
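
To give a feel for such rules, here is an approximation of the CQLinq syntax (the first query mirrors a stock NDepend rule; the namespace and type names in the second are hypothetical):

```csharp
// <Name>Methods should not be too big</Name>
warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfCode > 50
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode }

// <Name>GUI must not use the DAL directly</Name>
warnif count > 0
from t in Application.Types
where t.ParentNamespace.Name.StartsWith("MyProduct.GUI") // hypothetical namespace
   && t.IsUsing("MyProduct.DAL.Database")                // hypothetical DAL type
select t
```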

Agile specification for medical devices

For agile medical device designers, specifications are hard to grasp because of (yet another) contradiction between two seemingly opposite viewpoints: laconic user stories handwritten on post-its, as promoted by the agile movement, versus validated, traced, systematic requirements, as required by regulations. In this post, I’ll try to reconcile them by analyzing why specs are useful and by giving practical tips on how to handle them (people, lifecycle, tooling).

Johnson’s Specification, 1857


User stories are insufficient. I love user stories at the start of a project to understand the main features, as input to architecture and overall project effort estimation. But they prove insufficient for implementing medical devices (where full traceability between test cases and requirements is mandatory) and systems (people from the hardware and system teams need firm anchor points in the expected software behavior to do their job – their cycle times are so long that they can’t cope with never-ending change). Formal specifications are a must-have for medical software design. They are a more detailed view of business needs than user stories (looking at my projects’ numbers, the typical order of magnitude is ten requirements for one user story), and they are typically linked (via a traceability system) to the user story they make explicit. User stories and specifications are two forms of user-needs formalization that serve different purposes.

Specification team. Specification and development require different mindsets and aptitudes. Good business analysts have excellent social skills: they enjoy talking, they make other people instantly feel at ease and tell the short story long, they ask redundant questions to spot contradictions, and they have excellent writing skills to express complex things in a concise, understandable, testable manner. Developers, on the contrary, usually enjoy sitting alone in front of a computer, immersed in a delicious sea of abstraction where things are perfect and rules are absolute, in a state of intense concentration – the so-called zone – from which they reluctantly and grudgingly emerge, incurring high task-switching costs. People skilled in both fields exist, but they are quite uncommon. These activities entail such different states of mind that I recommend specialization. In addition, developers are often harder to hire, and thus more expensive, than business analysts: scarce developer time should be used wisely.


Specifications lifecycle.

  • Specifications are to be written before coding. It’s an order of magnitude cheaper to change specs than code. The idea behind specs is to see the big picture before getting into gory implementation details, spotting far-reaching consequences with a high impact on design very early on. In my experience, business analysis should start two or three months before coding, to leave time for the dust to settle, for people to reach consensus, for obscure details to be fully understood.
  • A good technique is what one of my teams calls SpecDays (a pun on Microsoft TechDays): a little slideshow showing (hopefully with diagrams and mockups) a digest of every feature to be implemented soon, so that each team member has a basic knowledge of it.
  • One developer should thoroughly analyze every spec. Developers are excellent at spotting contradictions and neglected edge cases. More importantly, with their knowledge of software design, developers can estimate the implementation cost of every requirement, and maybe negotiate one or two tough ones out of the spec to maximize bang for the buck. Moreover, this study of the spec will greatly improve the precision of the task estimates for the user story at planning poker time.
  • The spec should be mature at the start of the iteration where it is implemented, to avoid costly rework. But it shouldn’t be formally validated yet. A couple of requirements should still be allowed to change during implementation – especially if the change is requested by developers encountering some unforeseen hardship. If marketing changes its mind about a feature during the iteration, I would tend to refuse the change request – it would need additional analysis before implementation.
  • One very effective practice is the challenge by functional people: as soon as a developer has something more or less working on his or her computer, the person who wrote the spec and the ergonomics comes to the developer’s machine and plays with the app. This is a fast way to ensure that the spec was good (errors found then are late, yes, but they could be found later still), to ensure that the developer has correctly understood it, and to find a couple of bugs on the spot. All this before committing to the CI.


In medical devices, because requirements must be detailed enough to be traced to test cases, you might end up with a huge number of them. For example, I have a project with about 4,000 requirements. Tooling is paramount to manage that sheer quantity. Don’t write specs in a word processor: find a good tool that provides automation facilities. For example, with Doors, my teams export traceability info to generate a traceability matrix, and use custom fields and views so that specs are reused across projects to match the underlying reuse in the code.
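
As an illustration of the kind of automation that pays off, here is a sketch that joins exported requirements with test results to flag untested or failing ones (the semicolon-separated export layout is an assumption for the example, not the actual Doors format):

```csharp
using System;
using System.IO;
using System.Linq;

static class TraceabilityMatrix
{
    static void Main()
    {
        // requirements.csv, exported from the spec tool: "REQ-0001;The system shall..."
        var requirements = File.ReadLines("requirements.csv")
            .Select(line => line.Split(';'))
            .ToDictionary(f => f[0], f => f[1]);

        // testresults.csv, exported from the test tool: "TC-0042;REQ-0001;Passed"
        var resultsByRequirement = File.ReadLines("testresults.csv")
            .Select(line => line.Split(';'))
            .ToLookup(f => f[1], f => new { Test = f[0], Status = f[2] });

        // One line per requirement: id, status, number of tests, text.
        foreach (var req in requirements.OrderBy(r => r.Key))
        {
            var results = resultsByRequirement[req.Key].ToList();
            var status = results.Count == 0 ? "NOT TESTED"
                       : results.All(r => r.Status == "Passed") ? "OK"
                       : "FAILING";
            Console.WriteLine($"{req.Key};{status};{results.Count} test(s);{req.Value}");
        }
    }
}
```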

Another plus of specs is that they are understandable by people outside the development team (marketing, technical writers, support). If you don’t write specs, be prepared to spend a lot of time repeating what the software is supposed to do.

Architectural ideas worth considering for medical devices

Some architectural ideas are quite classical in the medical device software world and should be considered in the early stages of a medical device project. Here are some I have collected:

  • Split the software to isolate the riskier features. For example, in a radiography device, all code surrounding the manipulation of radioactive substances, their emission, and the alerts around them should be split apart. You want to keep that part small, to review it thoroughly, and to keep its complexity low to avoid bugs. In addition, there is a huge overhead implied by IEC 62304 class C requirements – you want to avoid that overhead in the risk-free parts of the app.
    • On the opposite side, risk-free zones (such as the GUI, provided it makes no decisions and holds no state at all) should be split from the rest so they can be restarted at will in case of failure. And GUIs are doomed to mutate forever (to stick to UX fashion, and to give marketing opportunities for more product launches) – you don’t want to re-validate the automation’s impact on biological phenomena for the sake of an update to the color palette.
  • Isolate real-time automation from the rest. Real-time or near-real-time is tough to get right. It will typically require lower-level languages (C, C++, PLC…) and maybe a special OS (RTOS) or RTSS (Intime, RTX, Preempt-RT…).
    • Having several OSes may have an impact on electronics (another PC, a dedicated board…) and on production price. This is a far-reaching decision that has to be taken wisely and early.
    • Low-level languages typically imply lower productivity (C vs C#). And this part of the code will more or less follow the development lifecycle of the hardware: slow to start, a nightmare to tune and fix with all the edge cases and recovery mechanisms, and then nothing – once the device is out, this code will have few reasons to change. The higher-level part, however, will always be changing – adapting to regulations, markets, healthcare network protocols.
    • Added bonus: isolating what’s not directly linked to the hardware will make a good basis for reuse on another machine.
    • Another extra for the road: you will need to emulate the hardware (to simulate rare conditions, to minimize costly and scarce real-hardware usage, to speed up tests, to avoid being blocked until the hardware is ready), so have it clearly isolated and mock it through a simple interface.
  • Isolate components with cybersecurity risks. Whatever is in contact with networks and USB will typically be the entry point for attackers; therefore, it should have minimum rights – so a successful attacker cannot get much further.
  • Beware of networks. Calling third-party web services is a nice idea for, say, a climate app. But for medical devices, beware. Imagine there’s an earthquake or a war – a situation where the internet might be working very slowly, and people requiring urgent attention are pouring into hospitals. Medical devices have to work no matter what. So code that Clinical Decision Support algorithm locally.

Classical separations

  • Isolate tools. This may sound obvious once more, but don’t ship all those R&D tools (simulators, tests, low-level system testing routines…) in your production code. Medical devices don’t need one more reason to fail. And keep in mind that these devices may be maintained in the field by versatile technicians who may have only basic knowledge of computers and will mess with the device if they can get their hands on powerful but unsafeguarded programs.
  • Design for testability. It has become mainstream, but I still see projects that avoid automated testing. My guess is their managers think automated testing is costly – and yes, it is! In my experience, deeply automatically-tested software costs about twice as much to implement. But you gain so much by avoiding horrible debugging time (who likes debugging? I’d rather write tests…) and by enabling refactoring through a safety net. And are we serious about writing safe and reliable medical device software, or are we not? You can’t be if you run away every time a quality-related activity seems costly. But to be profitable (and to keep costs in a reasonable zone), automated testing has to be planned for the long run, and code has to be architected in a testable way from the very beginning. My typical advice here: heavily use dependency injection, mock all hardware-related components and network interfaces, run the tests on your CI server, and integrate the test results into your traceability matrix to give them legitimacy (a sketch follows).
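
A minimal sketch of that advice (all names are invented for illustration): the analysis logic depends on a hardware interface, so automated tests can substitute a fake and run on the CI server with no instrument attached.

```csharp
using System;

// Hardware access hidden behind an interface...
public interface IPhotometer
{
    double ReadAbsorbance(int wavelengthNm);
}

// ...so the domain logic can be tested without the instrument.
public sealed class SampleAnalyzer
{
    private readonly IPhotometer _photometer;

    public SampleAnalyzer(IPhotometer photometer)
    {
        _photometer = photometer;
    }

    public bool IsAboveThreshold(int wavelengthNm, double threshold)
    {
        return _photometer.ReadAbsorbance(wavelengthNm) > threshold;
    }
}

// Hand-rolled fake used by automated tests (a mocking library works too).
public sealed class FakePhotometer : IPhotometer
{
    public double NextReading { get; set; }
    public double ReadAbsorbance(int wavelengthNm) { return NextReading; }
}

public static class SampleAnalyzerDemo
{
    public static void Main()
    {
        var analyzer = new SampleAnalyzer(new FakePhotometer { NextReading = 0.42 });
        Console.WriteLine(analyzer.IsAboveThreshold(340, 0.30)); // True
    }
}
```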

Process hints for architecting safe medical devices

Medical devices are meant to be reliable and safe, and architecture is key to achieving this. A sound architecture driven by risk analysis will mitigate disasters and their consequences. And a good architecture (especially if that quality is maintained over time) will make for software with fewer bugs – bugs that are easier to spot and easier to fix without regressions. So how do we approach medical device architecture?




  • Spend time on initial design. I know, Agile scorns BDUF (Big Design Up Front). I think BDUF should be avoided in the sense of defining UML diagrams for every class in the system needed to implement those 5,000 requirements. But if you really want to mitigate risks and maximize reuse, some separations have to be seriously considered at the very beginning – because they are a lot more difficult to achieve later.
  • Stay pragmatic
    • Beware of dreamed-up features. I’ve been amazed, over the past few years, at how different the actual evolution of the platform I was responsible for turned out to be from what we imagined at the beginning. Projects come and go, partnerships change, markets mutate. So keep YAGNI (You Ain’t Gonna Need It) in mind. But don’t fall into the trap of going blind and missing the chance to prepare for changes that really will happen.
    • Change the architecture when requirements change. Change the design when developers think a particular area of code is smelly. Nothing is sacred. Everybody gets things wrong once in a while, even rock-star architects.


Tracer bullets ricochet off their targets as Japanese Ground Self-Defence Force tanks fire their machine guns during a night session of an annual training exercise in Gotemba

Prove the architecture

  • With prototypes. I love the Tracer Bullet project pattern: in your first iteration, implement only one simplified core feature of the app that encompasses all layers and components of the architecture (the metaphor being that this practice, just like a tracer bullet in the army, gives the team enough light to understand the landscape and correct its fire in real conditions). Once you have this feature working, you know the architecture works (well, you don’t know yet how it will sustain change over time, edge cases and loads, but at least you know it’s not an impediment to development). Before that, you just hope it does.
  • With stress tests and load tests. They are good judges of an architecture. I’ve often been flabbergasted by how much harder it is to get those tests to pass than the team expects. You find so many rare, hard-to-reproduce bugs during those tests. You don’t want them to pop up in production – always at the worst possible time, by their very nature. As those tests might reveal deep problems rooted in the architecture itself, and thus very costly to fix, they should be performed as soon as possible.
  • With reliability tests. It’s always interesting to check how the software behaves when things fail. It is especially important when the software might save someone’s life – or ruin it. You should have strategies for handling every kind of failure: network failure, other-software failure, device hardware failure, computer hardware failure, OS failure, power supply failure, cyberattack, and even failure in your own software. It’s not in the norms, but you have to go beyond the norms as far as the real thing is concerned. And as with everything in software, it doesn’t work until it has been tested. So test it. Make sure you don’t lose data in case of a blue screen of death (they can be provoked on demand thanks to special drivers). Make sure you don’t lose data when you turn off the power switch (we had to disable several caches to make that work). Make sure you don’t lose biological results when the GUI goes mad (we set up an automated test that kills the GUI during load tests – see the sketch below).
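
The GUI-killing test can be as blunt as this sketch (the process name and the final check are hypothetical): kill the GUI in the middle of a load test, restart it, and verify that no result was lost.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

static class GuiKillTest
{
    static void Main()
    {
        // Assumes the GUI runs as a separate process named "DeviceGui",
        // while results are persisted by a separate service process.
        foreach (var gui in Process.GetProcessesByName("DeviceGui"))
        {
            gui.Kill(); // simulate the GUI “going mad”
            gui.WaitForExit();
        }

        Process.Start("DeviceGui.exe"); // restart it, as a watchdog would
        Thread.Sleep(TimeSpan.FromSeconds(10));

        // Project-specific part: query the result store and assert nothing was lost.
        Console.WriteLine("GUI killed and restarted – now verify no result was lost.");
    }
}
```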

Handling quality-related records in practice

Agile medical device software developers must solve a contradiction between two seemingly opposite philosophies:

  • From an agile perspective: go fast, experiment, deliver frequently, embrace change
  • From a regulatory perspective: produce auditable documents, double-check everything, make plans.

These philosophies have indeed often been in opposition (Apple and Google complaining that medical regulations slow down innovation, auditors being very suspicious of early agile projects). See AAMI TIR45 for an enlightening discussion on how to reconcile them.

The rest of this post focuses on practical advice on how to cope with quality-related records so you don’t waste your energy.


Automation of the production of recurrent documents


There are two categories of quality-related records:

  • One-shot documents, or documents requiring only minor updates (management plan, quality assurance plan, maintenance plan…). Do them once and for all, early in the project.
  • Recurrent documents (specs, test plans, test reports, design documents, traceability matrices…).

Recurrent documents are strategic, since their repetition (especially in an iterative development process) multiplies the load required to produce them. In developed countries, labor is expensive and cannot be wasted. So what can you automate?

Adding machine, 1909

  • Use a document lifecycle management tool to handle validation processes and versioning. Such tools take care of ensuring proper signatures, notify interested people, and, most importantly, act as a safe protecting your documents for the crazy time required by regulations (I’ve heard 7 years after the last device is sold; added to typical project durations of several years, we’re talking decades!). Odd as it may seem, I stumbled upon a project in 2014 where project documents and procedures were still manually signed. That’s a guaranteed recipe for losing documents and having holes in your validation process that delighted auditors will love to spot.
  • Use a spec and test tool to handle your requirements (I’ve been using Doors quite successfully, for example, but other good tools exist). Benefits:
    • Be a platform for further automation.
    • Factor out repetitive document introductions, definitions…
    • Handle traceability.
    • Share common requirements, risks, risk mitigation measures and test plans across projects. Especially useful when you share code.
  • One of my teams wrote a tool to gather info from Doors (requirements, risks, risk mitigation measures, test plans, executed test plans) and from the software factory (automated developer test results, automated GUI tests, automated stress and robustness tests) to generate a full traceability matrix. This matrix is required by regulations (to make sure every requirement has been tested), but it’s also very useful to the team. Only when a requirement has been successfully tested can I be sure that its implementation is done, so the matrix provides good metrics to analyze project progress. It helps to pay extra attention to risk mitigation measures: by identifying them as a special kind of requirement, it is easy to track how many are not yet implemented, or have failing tests. Automation is the only way to go with traceability matrices when there are thousands of requirements, manual test cases and automated tests.
  • We have a project (not yet fulfilled) to generate the list of dependencies and their versions by analyzing the NuGet packages.config files.
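
That is a few lines of LINQ to XML away; here is a sketch (the source-tree path is illustrative):

```csharp
using System;
using System.IO;
using System.Linq;
using System.Xml.Linq;

static class DependencyLister
{
    static void Main()
    {
        // packages.config files look like:
        //   <packages>
        //     <package id="Newtonsoft.Json" version="9.0.1" targetFramework="net46" />
        //   </packages>
        var packages = Directory
            .EnumerateFiles(@"C:\src\MyProduct", "packages.config", SearchOption.AllDirectories)
            .SelectMany(file => XDocument.Load(file).Descendants("package"))
            .Select(p => new
            {
                Id = (string)p.Attribute("id"),
                Version = (string)p.Attribute("version"),
            })
            .Distinct()
            .OrderBy(p => p.Id);

        foreach (var p in packages)
            Console.WriteLine(p.Id + " " + p.Version);
    }
}
```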


Document lifecycle

In an agile team, documents are long-lived and evolve constantly. In fact, I believe a document should be considered correct only once its data has been used by the following process (e.g. a spec is correct once it has been implemented, an architecture document is correct once load and stress tests pass, a test plan is correct once it has been executed). It’s a fact of life. So don’t get stuck waiting for document approval in the general case. Instead, work on everything in parallel and have people collaborate – they optimize complex, fine-grained interactions better than any process can.


Metaphor for real-life document workflow

Write documents at the time they are useful. I’ve seen projects blissfully ignorant of regulations until the end, when the documentation required by regulations is hastily written. This is nonsense. Minimizing documentation effort can be the enemy of minimizing overall project effort. Quality-related documents, written properly and at the right time, are often very useful.


  • Write the documents framing the entire program (such as high-level marketing needs) very early. They are likely to generate a lot of heat (political struggles) in the enterprise and take a long time to stabilize (i.e., until someone has won the battle). It’s very risky to start developing before that point – but it’s a good time for feasibility studies, finding and tuning the right process, choosing tools and languages, writing foundation frameworks, and hiring teams.
  • Think about architecture and risk analysis at the very beginning, when things are easy to change. Write them down in documents to set a clear vision that may last for years. These documents will be read by every newbie joining the team, saving the architect days of training – more time than is required to write and maintain the docs.
  • Coding guidelines (hopefully enforced via tools) are to be enacted at the very beginning of implementation – if not, you will have to refactor the existing codebase to abide by them.
  • Specifications are to be written before coding. It’s a lot cheaper to change text than code. If you can’t write a sentence explaining what the software is supposed to do, it means you are not sure yet. Developers often think they know what the program should do – except they lack intimate client knowledge and perspective.
  • Manual test plans are to be written before they are executed. (Free testing is a powerful tool too, but that is another story.)


Document approval


  • As far as I understand, for most documents the minimal approval process is one author plus one other person playing the roles of reviewer and approver. That should be the default approval process, to minimize waste. I’ve seen documents with more than a dozen people involved in the review process. Guess what? Everybody feels it’s useless to review the document because the others will spot the errors. The review ends up shallower than with a single reviewer who is fully responsible and committed.
  • The most efficient reviewer is the person who uses the document’s data as input: he or she has to read it carefully anyway, and has the skills to really understand it. That person should be the reviewer and approver of choice.
  • As documents change until the work is done, validate them only once the job is done – at the end of the iteration (notable exceptions: project management plans, high-level marketing needs).
  • Validating documents once or twice a year should be sufficient (provided you explain this in your project management plan) if your validation process is costly (for example, if you have a manual, paper-based validation process, or if your document lifecycle management tool has poor ergonomics and performance). You can’t waste time validating them at every iteration.

Regulatory Quality vs Product Quality

Over the years, I have found it very valuable to make a clear distinction between Regulatory Quality and Product Quality. Regulatory Quality means you can hand the authorities a documentation package proving you have followed their norms and standards. Product Quality means that users like the features and ergonomics, and that there are very few bugs – especially none in any area that could harm somebody.

Quality yin yang

There is no equivalence between the two concepts. While Regulatory Quality can have a very positive effect on Product Quality (IEC 62366 will definitely help define sound ergonomics, ISO 14971 will help keep risks under control), it is definitely possible to issue a very bad product (ugly, full of bugs, poorly architected, with silly features) and still hastily write a nice retro-documentation that fits the bill. Conversely, many companies write excellent software in the consumer space (websites, video games, operating systems) without using our norms – have you ever heard a project team outside the medical device world say: hey, I’ve been using 62304, it’s incredible how much more productive we are, how many fewer defects we find, it’s awesome, check this out? I’ve always found it disturbing that practices that seem to me so crucial for good software – refactoring, automated testing, automated coding standards, load tests – are not emphasized in the norms, or worse, blissfully ignored. Maybe it’s a good thing – there are legacy projects out there that certainly couldn’t use these techniques, and norms are handicapped by the least-common-denominator syndrome. But it demonstrates that Regulatory Quality and Product Quality are two different things.

The two areas are judged in very different ways. Auditors will generally not read the code or execute the app, because that would mean passing subjective judgment on projects, which is not tolerable – only the compliance of audit trail documentation with a norm is an objective criterion. But users are not objective. Users have feelings about your product. They will hate every all-too-familiar bug in their guts; they will complain about that impenetrable screen to every fellow user.

So my recommendation is to treat them separately. Provide auditors with the documents they need, in the way they like. But don’t stop there. Sure, medical device norms and processes will definitely help Product Quality – especially if you perform these activities early, honestly, and with the right amount of energy. But they are not enough. Norms and standards take years to reach consensus, be validated, and be widely implemented. The software world changes much more rapidly. Every year, new practices, new languages, new architectural styles emerge. You should stay tuned to what’s happening out there and try to apply it in our regulated world.

Satellite view of the Silicon Valley

A nice advantage of splitting these concerns is to maximize efficiency. Regulatory Quality implies the heavy burden of document templates, approval processes, tool validations – so many activities that are meant for the auditor, not the team, and are a strong incentive NOT to experiment, take risks, fail, or start something new. So what’s in the realm of the audit trail should be kept to a minimum. And there should be another, underground, agile world where lots of good practices are used to make good software. The downside is that the auditor will never know about all these good things we do that he or she might like. But if we’ve done a good job preparing our quality-related records, the auditor will be happy – if not, you have a problem.

Handling regulations

The medical industry is heavily regulated. That’s because bugs that kill people are to be handled with considerably more care than bugs that force a web page to reload. But guess what? That’s good for established manufacturers – a barrier against cheap and fast new entrants. Stop complaining about regulations; adapt to them, take advantage of them.

Carefully study regulations, norms and standards. They change all the time. New countries write their own (ANVISA, CFDA…). Worldwide manufacturers must infer from them a meta-regulation that bundles the worst (i.e., the most stringent) of them all, and that is relatively unstable, because it changes whenever any underlying regulation changes. Usually, organizations set up RA (Regulatory Affairs) teams for that purpose.

But don’t let specialized quality teams write procedures. Procedures must be written by the people who execute them (with proper RA supervision) if you want the interpretation of the norms to be productive (fast to execute, lean, no waste) and adaptive (changing frequently). It’s easy to demand a stupid, lengthy, repetitive task when you’re not going to do it yourself.


Having a 6-month approval procedure for procedure changes, with 10 senior managers involved, will definitely discourage change. The procedure for writing procedures must enable evolution and empowerment.

Challenge regulations. Sometimes they can be interpreted in a variety of ways.

  • Take for example NF EN 62304, which presents software development activities in numerical order, subtly implying you should follow the evil waterfall model. But that is nowhere explicitly written. It took AAMI TIR45 to explicitly legitimize Agile.
  • Regulations never talk about the amount of work to be done. 2 pages or 200 for a document? Challenge your impulse to be thorough. From what I’ve heard, auditors get mad when something is totally missing, but are open to negotiation when it’s small. You can be lean and provide the bare minimum if you don’t find the activity really useful – but have a rationale ready to justify your priorities.
  • Challenge RA people. When they say developers should add a best practice because of “regulations”, ask to read the text of the article of the regulation that actually imposes the constraint. Always come back to the text – that’s the core principle, that’s the real constraint. It’s too easy to invoke hazy “regulations” to justify any excessive demand. If it’s not mandatory, if we’re talking about best practices, then it must be decided by the development team. Best practices are only known by people who practice. Just to bring the point home: whenever something brought up in the name of “regulations” doesn’t feel right, always come back to the text and challenge its interpretation.


Remember, regulators don’t want you to drown in paper – they want medical devices to be safe and, incidentally, their design to be auditable. They are reasonable people. If something seems completely silly, there must be a more sensible interpretation.

One useful technique my teams use is to treat a regulation as a spec, and trace its implementation to our specs and risk mitigation measures. This works well for technical guides such as CLSI AUTO9 and CLSI AUTO11. Going fully traceable by writing procedures as specs seemed a little excessive to us, but why not? The good thing about this technique is that you can challenge any legal obligation, and it can help you in an audit, by capturing your decisions on regulation implementation and by showing how organized you are about them.