You, my colleagues, are world-renowned researchers, entrepreneurs, educators, and leaders.   I’d feel presumptuous offering you my thoughts on how to pursue research, entrepreneurship, etc.   I might as well share my thoughts on basketball with Coach K.   Thus, I’m going to focus on an issue that I perhaps spend more time thinking about: why I love my job.  And yes, I do love my job – and not just because I can sit outside with a cup of coffee discussing new research ideas with brilliant students and call it “work.”

As professors in a department like Duke ECE, we have an unusual ability to shape our jobs.  We get to choose our research topics, our collaborators, our students, and the classes we teach.   We control our research group structures, and we can adjust how much time we devote to each aspect of our jobs.   We can even control how much time we devote to work and which hours we choose to work.  

Given that there is more that I’d like to do than I have time to do well, I have tried to shape my job such that my time and energy are spent on the activities I find most rewarding.

Mentoring Students

If I had to single out the part of my job that is the most fulfilling, it is seeing my former students succeed.  To paraphrase a co-advisor of mine, our most important products as professors are our students.  We have papers we’re particularly proud of, but we end up being most proud of our students.    

I have had the great joy of mentoring many wonderful PhD students and undergraduates.  I enjoy this part of my job and thus try to maximize my time for it.   (I have had a few students who were difficult and not terribly enjoyable to advise, but they were thankfully in the minority.)  I have intentionally kept my group fairly small—4 or 5 PhD students and 1 or 2 undergraduate research assistants—so that I have enough time to devote to each of them.  My meetings with my students are usually the highlights of my week, and I don’t believe I could manage more students and still have the same kind of relationships with them.  I fully realize that there are faculty, including colleagues here in Duke ECE, who can have more students and easily find time for all of them; I’m just not one of them.  Perhaps my group size will increase when my kids are older and want less of my time.  My group size decision is a conscious trade-off: there are certain projects that just can’t be done with a group of this size.

Teaching

I very much enjoy teaching students who are genuinely interested in the topic, and I’m happy to devote time to that.  I still get a kick out of that moment when the students realize that computers aren’t magic.  At Duke, I’m extraordinarily lucky to teach a lot of superb students and relatively few slackers.  I have found it tremendously rewarding when former students contact me or visit me to tell me about how they have used what they learned in their jobs and in graduate school.  I do worry about eventually getting bored teaching the same courses over and over, but I hope that alternating among four different courses helps stave off that problem.

The only part of teaching I dislike—other than grading, which I absolutely hate and thus delegate to TAs—is handling the requests for extensions, special consideration, make-up work, etc.   Early on I discovered I could solve this problem by having a single one-size-fits-all policy (summary: “No”), and I could live with being considered “inflexible” (or other less tactful words one finds in one’s teacher/course evaluations).  Then I discovered that I could pitch this policy as fairness, since how could I possibly judge the relative merits of 50 different excuses in a way that’s fair to all?   Students bought that explanation and it has saved me time that I can devote to other activities.  The only drawback is receiving less fodder for my Fault Tolerant Computing course, because I’m no longer told about as many instances of computers failing (or being stolen or possessed by gremlins) the night before assignments are due.

Service: Committees and Undergraduate Advising

Service is a necessary part of the job, but I doubt there are many of us who joined the department because of a burning desire to serve on the graduate studies committee.   But, given that service is necessary, the key has been finding service jobs where I care about the outcome and where I may have something unique to offer.  I was (relatively) happy to serve on IT committees because I rely on the IT infrastructure here and I’m often among the first ones to break a new service.  I was (relatively) happy to serve on a joint CS/CE committee because the relationship between CS and Computer Engineering (and ECE, as a whole) is important to me.  I was (relatively) happy to serve on faculty search committees because I care about who my new colleagues will be.  All of these service jobs were preferable to alternatives, some of which I wasn’t clever enough to escape.

Early on, I discovered that undergraduate advising was a bimodal experience.  I enjoyed advising ECE/CS double majors, and it was rewarding to be able to offer them useful career guidance and connect them to colleagues in academia and industry.  However, it was miserable to advise BME majors, because I had nothing to offer them.  A typical student question was something like “So, Prof. Sorin, what do you think of BME 273?”  I don’t think about BME 273.  (I offered less grouchy, but no more insightful, responses to the students.)  At one point it occurred to me that I could ask promising ECE/CS double majors in my classes to become my advisees.  The key is making sure I take on enough new ECE/CS advisees that the DUS doesn’t think I need any other advisees.  I tend to end up with perhaps a few more advisees than I’d have if I were passive, but the advising is far more rewarding this way.  (Yes, I realize that this approach isn’t scalable, so don’t all start doing this!)

Lifestyle

One of the many aspects of Duke that drew me here was seeing that faculty could be highly successful and have families and lives.   I cherish the ability to leave Duke early and be with my kids from around 4:30 until their bedtime, knowing that I can get my work done later.    I consciously categorize my work into “must do at Duke” and “could do from home.”   I like that you, my colleagues, care about what a faculty member accomplishes without needing to compete to see who can spend the most hours visibly working at Duke.

Personal Freedom vs. Departmental Service

So everything is great, right?  We can shape our jobs in a way that we can be successful and apply our limited time and energy towards the activities we find most rewarding.  But what if we all shape our jobs to suit ourselves?  One problem is that we’d be unlikely to have a DGS, given that few of us would prioritize being DGS in terms of how rewarding it is (compared to, say, research).  OK, so we can “incentivize” jobs like this with perks, although I’ve not yet figured out what perks would be required to persuade me to take on that job.  But what about serving on committees?  What about teaching large, core undergraduate courses (instead of small graduate seminars)?  The operation of an academic department like Duke ECE fails Game Theory 101; the incentives are set up in a way that almost completely discourages time spent on service.  We are rewarded for our research contributions—and, to a somewhat lesser extent, our teaching—and thus time spent on service is clearly counter-productive.

So why does our department (usually) function well despite the presence of disincentives to service?  It’s because of you, my conscientious colleagues.  We tend to do “the right thing” even when it’s not in our interests.  It certainly helps that departmental leadership publicly recognizes our service, but I doubt most of us serve on committees because we’ll get recognized for that service.  It seems we need to readjust the incentive system to encourage service, but I have not reconciled that goal with the justifiable primacy of research.

Concluding Thoughts

Many colleagues of mine at other universities have left academia to take jobs in industry.  Although some of them left for the understandable reason of wanting a new challenge or for financial reasons, others have told me about how they couldn’t find enough time to do the parts of their jobs that they enjoyed the most.  I try to keep that thought in mind when I find myself drifting (“hey, that looks like it could be fun”) instead of being strategic with my time.   Perhaps that’s why I still love my job.


The recent acquisition of Advanced Liquid Logic (ALL) (http://www.liquid-logic.com/) by Illumina, Inc. of San Diego (http://www.illumina.com/) on July 23, 2013 capped the translation of Duke University research and inventions to a major player in the gene sequencing marketplace. According to a press release (http://investor.illumina.com/phoenix.zhtml?c=121127&p=irol-newsArticle&ID=1840193&highlight=), “ALL… has developed a proprietary “digital microfluidics” technology based on electrowetting that precisely manipulates small droplets within a sealed disposable cartridge to perform complex laboratory protocols. This proven technology will enable Illumina to deliver the simplest and most efficient sample-to-answer next-generation sequencing (NGS) workflow.” Whereas gene sequencing is currently done primarily in research institutions, ALL’s technology conceivably could open access to big clinical markets by providing an automated sample-preparation front end to Illumina’s awesome sequencing tools. Who would have thought…?

I say this because, while our research in microfluidics had good potential for performing miniaturized, complex laboratory procedures, two things made the events of July 23 seem rather improbable: 1) a poor track record by microfluidics companies worldwide in commercializing devices, and 2) a poor track record and infrastructure at Duke for encouraging faculty entrepreneurs and assisting the translation of patents and research ideas into spinoff companies.

Poor Commercialization Track Record
The commercialization track record of microfluidics technology in the US was poor – a surprisingly small number of practical commercial devices had been successfully introduced into the marketplace. Over the past 30 years, microfluidic companies have been ne’er-do-wells. Except for ink jet printer cartridges, few microfluidic products had hit a home run! An exception was the iSTAT handheld point-of-care system, launched in the mid-1990s after burning through about $50M in R&D funding over 10 years. The device used passive capillary flow and embedded sensors to measure blood chemistries. iSTAT’s exit was a successful acquisition by Abbott in 2004 for $392M, and the product is still being sold.

While iSTAT’s acquisition encouraged would-be entrepreneurs in the field, few microfluidic companies have flourished. This is largely due to three things: 1) a lack of killer apps, 2) the high R&D and manufacturing costs of microfluidic devices to be sold into a relatively small market, and 3) competition from traditional robotic liquid handling systems. (H. Becker, Lab Chip, 9, 2759 (2009)) Indeed, in 2000 the microfluidics market was only $34M, excluding print heads and microarrays.

Despite the lack of commercial success, DARPA saw a big opportunity for microfluidics in defense applications and pumped millions of dollars into R&D programs during 1997-2002 (Microflumes, Microflips). DARPA’s model was to fund universities, which would generate intellectual property, which then would be licensed to companies and, presto, translation would happen! Only it didn’t. The costs of getting research out of a university and performing product development were too high. As a result, DARPA eventually cut funding for microfluidics when no new usable technology emerged to assist the warfighter on the battlefield. DARPA has not funded any new purely microfluidics programs since. Microfluidics had its chance to demonstrate killer applications and translation, and the outcome was disappointing.

No Academic Infrastructure for Translation
Universities like Duke had set up licensing entities in the 1990s, such as the Office of Science and Technology (OST), to attract royalty revenues and research contracts from companies. Generally, in my experience, the approach to licensing was to hold the line and protect the interests of the university at all costs, rather than to “do the deal.” In addition, it was very difficult for faculty to get patents. First, Duke’s basic science focus did not encourage patents. Patents didn’t count for much with the university’s appointments, promotion and tenure committee (APT), even though patents were peer reviewed by the PTO. Patents just were not perceived as evidence of scholarship! Second, while OST had been set up to apply for and license patents, unless your Duke patent application came with a prospective licensee who was willing to pay for patent prosecution, your application would likely be rejected by OST. The other disincentive was Duke’s uninspiring patent royalty sharing policy.

It should be evident that, in view of the realities of 1998, we didn’t enter microfluidics research with visions of ever spinning out a company from Duke with licensed patents in hand. In the space below I have highlighted some key lessons learned from the 15-year journey of research translation that I believe will be beneficial to others at Duke who have an entrepreneurial yearning. These recollections are my own and do not represent an official view of ALL or Duke. I start with us going in the wrong direction in 1998.

The Early Duke Work
The late Prof. Allen Dewey at Duke had a vision in 1997 that microfluidic chips could be designed with high integration density using principles learned from computer architecture and VLSI silicon chip technology. He sold this vision to DARPA, which funded MONARCH (Microfluidic Operations and Network Architecture Characterizations) during 1998-2002. The goal of this project was to design and evaluate architecture and technology for a reconfigurable microliquid handling system with biomedical applications. As the co-PI, my job was to deal with a fundamental problem: how to scale up a microfluidic system in which liquids were driven by micropumps and switched with microvalves, all of which really were not very “micro”. Oh yes, I also was charged with thinking up some biomedical applications, even though my last biology class was in 1958! It was hard work for a semiconductor guy.

What we saw was that workers in microfluidics were trying to implement biomedical operations, like capillary electrophoresis and PCR, on glass or plastic substrates that had etched channels for routing liquids, with MEMS-fabricated (microelectromechanical systems) pumps and valves assembled on the substrates. Also, electrokinetic flow was starting to become commercialized (Caliper Life Sciences), where liquids were actuated in response to large voltages applied through the liquid along a channel. Crude fluorescent sensing was done through a microscope hovering over the chip.

Designing chips by hand
Microfluidic technology was such that chips needed to be designed by hand – full custom design by specialists. The number of such specialists amounted to maybe 200 designers in the world. However, the tens of thousands of workers who had applications for microfluidic devices had no chip design skills and, thus, no access to chips. This “applications bottleneck” was limiting commercialization.

Our EE perspective saw an opportunity for hierarchical design tools to enable those with applications to gain access to microfluidic chip design by entering the design process at the applications level, or perhaps the programming level. This access could unlock the bottleneck for applications looking for microfluidic solutions. Top-down design rather than bottom-up design! We also envisioned microfluidic chips with “data buses” and processors, as used in computer architecture. This is what VLSI chips used – so why not microfluidic chips?
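To make “entering at the applications level” concrete, here is a hypothetical sketch (all names invented; no such tool existed at the time) of what a user-facing protocol description might look like, with the electrode layouts and actuation sequences left to lower layers of the design hierarchy:

```python
# Hypothetical application-level protocol description; a hierarchical tool
# chain would compile this down to a chip layout and actuation sequences.
protocol = [
    ("dispense", {"reagent": "sample", "volume_nl": 300}),
    ("dispense", {"reagent": "buffer", "volume_nl": 300}),
    ("mix",      {"inputs": ["sample", "buffer"], "cycles": 8}),
    ("detect",   {"assay": "absorbance"}),
]
```

The user states what the assay does; the tools decide how the chip does it.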

However, as EEs we were in the awkward position of being in a field dominated by chemists and mechanical engineers who understood both the applications and microfluidic technology. Research directions already had been set in motion. In 1998 we were just novices with an EE perspective, which didn’t amount to much at the time. However, taking a different direction that was informed by our experience in a vastly different world (computers, microelectronics, computer-aided design, etc.) would make for an entertaining ride.

At a DARPA PI meeting in 1998 I presented our ideas on hierarchical design and architecture to a skeptical audience that included about 150 of the 200 targeted chip design specialists I mentioned above. What we EE’s viewed as an opportunity was viewed by these chip designers as threatening and unrealistic. First, we were treading on their domain! They argued that the problem of microfluidic chip design was too hard to be captured in a computational scheme with software interfaces for non-specialists. One MIT chemical engineering professor yelled that we should consider architectures based on chemical processing plants, not computers.

Considering the complexities of microfluidics, as it was known at the time, the design of continuous flow chips using valves, pumps and channels was, indeed, too difficult to capture in efficient computational models in computer-like architectures. But a programmable microfluidic processor would be really cool! What was needed, however, was a new branch of microfluidics that would enable such a radical approach.

Listening to sage advice
Following this PI meeting, we were trying to keep from embarrassing our DARPA program manager further, so we kept an open mind to the advice of wiser folks in the field. There was outspoken advice everywhere on how microfluidic chips should be designed. Kurt Petersen, who had founded the most advanced microfluidics company, Cepheid, said that one needed to get away from today’s diagnostic procedures based on fluid boluses. The bolus approach involved placing samples in a reaction tube, adding reagents, mixing, etc., as is done in a lab. Petersen believed that the bolus approach was too complicated and didn’t scale. “Instead of blindly automating the traditional, inefficient, laborious bolus approach, sample and reagent biochemical processing should be synchronized by a continuous-flow approach in which the biomedical fluids continuously flow through channels, reaction sites, processing sites, and measurement sites.” (K.E. Petersen, et al., Kluwer Academic Pubs., Boston, pp. 71-79, 1998)

Not willing to argue with the likes of K.E. Petersen, we started down the continuous flow approach and burned a year on it. After the first student dropped out and after suffering with architectural ideas that couldn’t be made to work, we determined that the maligned bolus approach was actually a better basis for programmable microfluidics. We simply lacked a way to implement it!

First lesson learned: Struggling with failed research may be a good opportunity for invention.

The dancing droplets
If you are working in a lab, the bolus approach to doing chemistry requires test tubes with people shuttling the tubes from one operation to the next. The chemistry lab is a completely reconfigurable architecture in which any given resource (hot plates, centrifuges, beakers, etc) can be used to perform multiple tasks in any sequence. By contrast, continuous flow microfluidics requires that liquid in a channel flows only sequentially from one resource location to another in one direction. Thus, resource use is fixed and hardwired for a given application. If you make a mistake, there is no going back for rework. If this were the case for microelectronics, we would not have computers!

Thus, the research problem was framed. Since people and test tubes don’t scale to the chip level, you need to be able to actuate discrete volumes of liquids (boluses) and route them through shared resources in a reconfigurable architecture. In computer chips, discrete packets of charge are switched and routed with transistors. But in 1999 there was no microfluidic equivalent of a transistor that could switch and route discrete volumes of liquids.
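To make the packet-routing analogy concrete, here is a minimal sketch (Python, with invented names; real electrowetting arrays impose stricter spacing rules so that adjacent droplets don’t merge) of droplets treated as routable packets on a grid of electrodes:

```python
# Minimal model of the "droplet as routable packet" abstraction.
# All names are invented for illustration.

class ElectrowettingGrid:
    """A 2D array of electrodes; each droplet sits on exactly one electrode."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.droplets = {}  # droplet id -> (row, col)

    def place(self, did, cell):
        assert cell not in self.droplets.values(), "cell already occupied"
        self.droplets[did] = cell

    def step(self, did, direction):
        """Move a droplet one electrode by 'energizing' the neighboring cell."""
        dr, dc = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}[direction]
        r, c = self.droplets[did]
        nr, nc = r + dr, c + dc
        assert 0 <= nr < self.rows and 0 <= nc < self.cols, "off the array"
        assert (nr, nc) not in self.droplets.values(), "would merge droplets"
        self.droplets[did] = (nr, nc)  # voltage on (nr, nc) pulls the droplet over

grid = ElectrowettingGrid(4, 4)
grid.place("sample", (0, 0))
for move in "EESS":            # route the droplet to a shared mixer at (2, 2)
    grid.step("sample", move)
print(grid.droplets)           # {'sample': (2, 2)}
```

Like a transistor switching charge, the energized electrode decides where the droplet goes next; nothing about the path is hardwired.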

The idea for a microfluidic transistor came from a Russian instrument maker who had been a postdoc in Cell Biology at Duke. He had a notion of how to move droplets (boluses) on a hydrophobic surface under voltage control. Alex Shenderov was referred to me by a colleague in Biology as someone who knew about liquid actuation using electrowetting and who had filed a patent application on the topic. We had never heard of electrowetting.

The most knowledgeable treatises at the time described electrowetting as “troublesome”, since subtle, uncontrolled changes in the liquid/surface interface made this actuation principle difficult to control and too dependent on the liquid’s properties. Also, most of the work in the field was 10-20 years old. Optimistic, given our knowledge of modern microfabrication and surface control, we embarked on building electrowetting-based chips. I hired Alex as a consultant in 1999, and together he and my student, Michael Pollack, built the first working electrowetting microfluidic devices in early 2000. The videos of water droplets moving under voltage control were captivating. DARPA called them the “dancing droplets”, which even got the attention of DARPA’s Director. In late 2000 we put up a web site showing the dancing droplets and had tens of thousands of hits in the first days (http://microfluidics.ee.duke.edu/). Digital microfluidics had been demonstrated. Within a year, over 35 labs worldwide were also working on electrowetting microfluidics.
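The actuation principle itself is compactly captured by the standard Lippmann–Young relation: a voltage applied across the dielectric beneath a droplet lowers the droplet’s apparent contact angle,

$$\cos\theta(V) = \cos\theta_0 + \frac{\varepsilon_0\,\varepsilon_r}{2\,\gamma_{LG}\,d}\,V^2$$

where $\theta_0$ is the zero-voltage contact angle, $\varepsilon_r$ and $d$ are the dielectric’s relative permittivity and thickness, and $\gamma_{LG}$ is the interfacial tension between the liquid and the surrounding medium. Energizing the electrode under one edge of a droplet lowers the contact angle on that side, and the resulting asymmetry pulls the droplet onto the energized electrode.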

Second lesson learned: We knew we could get mileage out of cool videos for a while, but the technology had to be understood and applied to real problems. We were the wrong ones to find uses for our technology.

Looking for the killer app
From the beginning we focused on technology qualification by demonstrating microfluidic functions that could be performed on a chip, such as droplet dispensing, droplet splitting and merging, and mixing the contents of two droplets, and by measuring how fast the droplets could be moved. Would you believe 20 cm/sec! Nevertheless, our early focus was on basic understanding.

From this work we demonstrated a microfluidic “toolkit”, which could be used in performing chemical operations on boluses (droplets) of liquid. The new droplet platform drew interest from Prof. Krish Chakrabarty, who saw possibilities in new microfluidic systems and architectures that could be reconfigured on the fly. The concept of programmable microfluidics emerged from Duke in 2001. (J. Ding, K. Chakrabarty, and R.B. Fair, “Scheduling of microfluidic operations for reconfigurable two-dimensional electrowetting arrays,” IEEE Trans. CAD of Integrated Circuits and Systems, 20, 1463 (2001)) Droplets were akin to bits of data that could be transported over buses and processed under computer control. Microfluidic chips with computer architectures were finally feasible.
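As an illustration only (the cited paper’s formulation is far more sophisticated), a toy list scheduler shows what “scheduling microfluidic operations” means in practice: treat assay steps as instructions with dependencies, and let a fixed number of on-chip mixers limit parallelism:

```python
# Toy list scheduler for droplet operations; illustrative only, not the
# algorithm from the Ding/Chakrabarty/Fair paper.

def schedule(ops, deps, n_mixers=2):
    """ops: {name: duration}; deps: {name: set of prerequisite op names}."""
    done, t, timeline = set(), 0, []
    running = {}  # op name -> finish time
    while len(done) < len(ops):
        for name in [n for n, fin in running.items() if fin <= t]:
            running.pop(name)       # retire finished operations
            done.add(name)
        for name in ops:            # issue ready ops while mixers are free
            if (name not in done and name not in running
                    and deps.get(name, set()) <= done
                    and len(running) < n_mixers):
                running[name] = t + ops[name]
                timeline.append((t, name))
        t += 1
    return timeline

ops = {"dilute_A": 3, "dilute_B": 3, "mix_AB": 2, "detect": 1}
deps = {"mix_AB": {"dilute_A", "dilute_B"}, "detect": {"mix_AB"}}
print(schedule(ops, deps))
# [(0, 'dilute_A'), (0, 'dilute_B'), (3, 'mix_AB'), (5, 'detect')]
```

The two dilutions run in parallel on the two mixers, and the dependent steps wait their turn, much as instructions do in a processor.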

Michael Pollack’s dissertation in 2001 would become a “must read” for all future graduate students in my lab. He demonstrated that the technology was robust, but we still didn’t know what it was good for. All we could do was think of the possibilities. One possibility was multiplexed assays. Vijay Srinivasan developed and characterized enzymatic assays on an electrowetting chip. The work was promising enough that GlaxoSmithKline funded a project to explore electrowetting of simple biological fluids for the development of novel microfluidic assays of protein function, such as kinase assays. Since GSK’s main interest was drug discovery with small molecules, they concluded that our platform would not be useful to them. They rejected several commercial platforms as well. Microfluidics was still a loser. We continued to scratch our heads for appropriate uses of a cool technology.

The Spinoff Years

Third lesson learned: You can think up uses for your technology, but it’s like pushing on a string.

It is not uncommon for researchers and technologists to try to conjure up applications for their ideas. This is like pushing on a string (supply-side tech transfer). And oftentimes we come up with the wrong ideas. Having someone pull on your string from the demand side is more likely to lead to success.

Alex Shenderov would form Nanolytics in 2001 to commercialize digital microfluidics. The company grew to about 30 employees, but it eventually failed after trying supply-side ideas underpinned by fickle government funding. Nanolytics would be acquired by ALL in 2007.

Fourth lesson learned: Transferring technology out of a university lab is facilitated by transferring the students as well as the patents.

Michael Pollack and Vamsee Pamula, who in 2002 were postdocs in my lab, really believed in droplet technology, so when the DARPA funding cut came, they had much incentive to continue developing the technology …and eating. Having won the first Duke Start-up Challenge around 2000 (with a wireless mouse product), Michael and Vamsee definitely had the entrepreneurial bug, but they were like deer in headlights. It wasn’t until 2004 that Advanced Liquid Logic was founded, with two NIH grants in hand. Federal contracts provided R&D funding, and responding to BAAs (Broad Agency Announcements) established capability targets for the technology. At this time, ALL’s biggest asset was four former Duke Ph.D. students, who themselves transferred electrowetting technology.

With great difficulty, ALL licensed then-issued Duke patents on electrowetting around 2006. Attempts to reach a licensing agreement directly with Duke failed after trying for 1.5 years, so ALL went through Southeast Techventures (STI). STI had been set up during former Pratt Dean Kristina Johnson’s time at Duke with authority to license Duke patents to third parties. Thus STI got the nice licensing fee rather than Duke. Thereafter, ALL would be very aggressive in both acquiring patents and filing new applications, which would give them about 100 issued patents by 2013. Over 50% of those issued patents are jointly held with Duke. ALL essentially bottled up the entire field to become the only commercial player in the US. These joint patents will likely provide a continuing royalty stream to Duke, since they are now owned by Illumina.

While continuing to stay viable with federal funding, ALL also worked to turn prototype devices into stable, cheap, manufacturable systems. It was important to have a robust control box and reliable microfluidic chips in the hands of end users who wanted to try out their own applications. With success, end users might provide the demand-side pull of a killer app. It appears that having a stable, cheap microfluidics platform with good fluidic and electronic interfaces was a key to ALL’s successful acquisition by one of their end users.

Fifth lesson learned: Commercial success depends on a reasonably good idea and a great management team.

Perhaps the most important advice I gave to Michael and Vamsee was: “get a real CEO who has done it before.” As an advisor to the Aurora Funds since 1995, I saw many university spinoffs with great ideas but poor management. Oftentimes founding CEOs can do more harm than good. Rich West joined ALL in 2005. He had been founder and CEO of TriVirix Inc., and he was also a Duke engineering graduate. With Rich, ALL obtained investor funding without share dilution, set up a Board of Directors and a Scientific Advisory Board, and held down costs. ALL was starting to look like a real company.

Sixth lesson learned: Be prepared to be viewed as a competitor by your spinoff company and with suspicion by your university.

Acquiring patents is an important aspect of commercial success and attracting investors. ALL’s exclusive rights to Duke’s patents on electrowetting meant that our lab at Duke couldn’t do research with other commercial ventures in electrowetting that would “enable” the competition. In fact, in the beginning, our lab was the competition! This competition showed up in applying for federal grants, where Duke and ALL would be applying for the same funding.

The solution was to join forces. In 2005 Duke, ALL and Stanford were funded together by NIH to do a joint program in DNA sequencing on a chip. This type of joint research was the appropriate model for peaceful coexistence. And, it raised Duke’s awareness of what we were doing, although not in a way I foresaw.

As PI on the 2005 NIH grant and co-inventor on patents licensed to a co-PI’s company, I set off the conflict-of-interest alarms at Duke. Oh yes, I also chaired ALL’s Scientific Advisory Board and had options.

Finally, Duke was starting to pay attention to our entrepreneurial activities! In short order a plan was established to “manage” me, to be sure I did nothing that would embarrass the university. While I understand Duke’s position, it turns out that a conflict-of-interest (COI) plan would be the extent of Duke’s proactive involvement in our venture. This takes me to the next lesson learned.

Seventh lesson learned: Duke has no evident infrastructure to facilitate the translation of its university research to spinoff companies.

Duke University’s official response to the acquisition of ALL by Illumina for a huge sum of money has been … deafening silence! For sure, Dean Katsouleas and my colleagues in Pratt have been excited and congratulatory, and for that I am grateful. But other than worrying about a COI plan, there has been no official recognition that successful commercialization of Duke’s research happened on July 23, 2013 at an unprecedented scale for Duke. And Duke even received a check from the deal, as did STI!

I have since learned that Duke’s Office of Licensing and Ventures (OLV) was set up with the express intent to “…translate academic discoveries into commercial products…” by “…direct, daily interaction with faculty, small and large businesses…” (http://olv.duke.edu/index) To their credit, OST and eventually OLV were helpful in interfacing with ALL’s patent attorneys on issues associated with joint Duke/ALL patents and assignment. Henry Berger was particularly helpful in this regard. But translation of our academic discoveries into commercial products otherwise was done independently of Duke and OLV. It seems that ours is the preferred model. Who can argue with success?

While there may be an expressed interest in supporting the growing number of entrepreneurs in the university, the biggest translation in the history of Pratt just occurred without those “daily interactions” with OLV. But, we had a really good conflict-of-interest plan!

Eighth lesson learned: More entrepreneurial opportunities are emerging from the university, and they may fail if Duke remains agnostic to commercialization, continuing to run a mere licensing shop (OLV) with an uninspiring patent royalty plan for Duke inventors and no recognition of patents by APT.

Keys to Success
Over time, ALL grew in size to about 80 employees. They were very successful in securing federal funding, which paid for R&D and operations. We at Duke were able to join in on funded research that would have been difficult to do without ALL’s stable microfluidic platform. At the same time, ALL pursued product development – a user-friendly platform that allowed users to develop applications in their own labs. The key to product development was a reliable microfluidic platform made in a cheap manufacturing technology. Also, ALL aggressively pursued prosecution of intellectual property. They found strategic partners who helped them find applications for their technology. And they were well managed. The acquisition by Illumina will facilitate ALL’s transition from a federally funded R&D shop to a product company driven by one of the hottest apps in town.

I feel like all of those involved in the early Duke research in microfluidics, and those involved in more recent technology development efforts at ALL, have collectively experienced the ultimate peer review! Granted, we all experienced peer-review milestones along the way with DARPA, NIH, and NSF, as well as the United States Patent Office, but when someone is willing to pay in the range of $100M for access to your ideas, people, and technology, well, that’s something else.