A retrospective in software development methodology is a look back at a project cycle. Usually, a retrospective refers to a formal meeting held at the end of a development cycle. Retrospectives provide context for where you are, and can help you figure out where course adjustments might be needed.
I’m going to start a series of project retrospective blog posts. My intent is to go a bit beyond the standard “lessons learned” aspect of a classic case study by discussing ways to address those lessons learned if I were delivering on the same requirements today.
I understand there’s some risk in this. “What I’d do differently today” is, by its very nature, a moving target. The answer could be different tomorrow, when you’re reading this. Perhaps that’ll be another chance to revisit the topic.
In these posts, I’m going to go over my own project history and do a full post-mortem on whole project deliveries. In order to protect potential client property, I’ll generally avoid identifying the client I did the work for and/or any publicly known project name. Teammates familiar with the projects will be able to identify them, but that’s ok.
Whenever working on a longer-term, non-trivial application, it’s inevitable that before the project is completed, technical debt sets in. Most commonly, some components are obsoleted by newer versions of tools used in the project, and there’s no immediate need or budget to update them. As long as those tools are still supported, the update is generally de-prioritized as a problem for another day. Longer term, everything is eventually obsoleted.
Another inevitable part of code development is the classic “if I had it to do over again” aspect. This is a big-picture extension on the old “Make it work, make it right, make it fast” practice. The first time through a project, there’s always “make it work” moves that end up being what gets delivered, without opportunity to get to the “make it right” / “make it fast” cycles. If I had the project to do over again, I’d “make it more right” or “make it faster” by doing ‘X’.
Likewise, every project cycle has its ups and downs in terms of team interaction. Sometimes folks are on the ball, and the code flows. Other times, there are what we call blockers… things that hold up progress.
A blocker could be any number of things.
A common blocker is missing or incomplete requirements. It’s hard for a programmer to teach a computer to do work if the programmer doesn’t know how to do that work.
Another common blocker is access or permissions. A programmer might have a requirement to develop code that depends on another service. If that’s the case, the programmer might be able to build an abstraction of that service, but eventually will need permission to access that service in some form to do integration testing.
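To make that abstraction idea concrete, here’s a minimal sketch (in TypeScript, with entirely hypothetical names) of hiding a dependent service behind an interface, so a stub can stand in until real access is granted:

```typescript
// The external dependency is described only by an interface.
interface AccountService {
  getBalance(accountId: string): number;
}

// Hypothetical stub used while permissions/credentials are pending.
class StubAccountService implements AccountService {
  getBalance(accountId: string): number {
    return 100; // canned value for local testing
  }
}

// Code under development depends on the interface, not the real service.
function formatBalance(svc: AccountService, id: string): string {
  return `Balance for ${id}: $${svc.getBalance(id)}`;
}

console.log(formatBalance(new StubAccountService(), "acct-1"));
```

Once access is granted, a real implementation of the same interface swaps in for integration testing without touching the calling code.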
In this case, I’ll still take the classic retrospective approach. I’ll address the following questions:
What did we set out to do?
How and why did we deviate from plan?
What worked?
What didn’t work?
What would we like to do next time?
How could we improve what worked?
How will we minimize the risks of what didn’t work?
How will we address any technical debt incurred?
Some retrospectives exclude management in order to keep these questions from being answered politically. In better situations, representatives of management and/or project stakeholders are included in order to get a more complete view of the project.
In my posts on the topic, it’ll just be my perspective, and I’ll be addressing the questions holistically. With the projects completed and the delivery teams disbanded, and with minor name changes to protect the innocent, politics need no longer apply.
I’ll admit, there’s some self-gratification in doing these post-mortems publicly. I’ll be able to show off experience, and hopefully grab the attention of folks who’d like to produce similar projects. My hope is to inspire more of the latter and spark up the conversation around how to apply similar solutions (preferably with the “desired state” improvements).
A lot of folks are familiar with Cortana, Microsoft’s AI bot that’s integrated into Windows 10. Cortana was inspired by a character from the video game franchise Halo. The Halo character known as Cortana was a “Smart AI”, an artificial persona, and asset of the game’s main protagonist. Apparently, there were others of her kind. Among them were Isabel, Mo Ye, Serena, and Iona.
I’ve been considering building a chatbot based on tools available to me as a bot developer. I’m not about to try building my own “Cortana”… but why not evolve a “proof of concept” that could even extend Cortana functionality? And what better name to give it than something inspired by the same concept? I might even give Iona some of the features of the Halo character; most notably her alter ego that she uses to disguise herself as a much simpler bot. We’ll see how that pans out. For now, Iona is a much simpler bot…
I created this page partially because I wanted to publish the bot in Teams, and needed a page describing the utility.
Early toying with Teams integration is going well so far.
I can’t believe I haven’t blogged this out, yet. I’ve been building chatbots for clients for years now, and presenting to the tech community on the topic at least as long.
For a while, I was doing “Bot in a Day” workshops, all over the country. Held at Microsoft Technology Centers in places like Boston, Reston, Philly… I just realized the last one I did was over a year ago, now, in Irvine, California.
The reason we don’t need to do the day-long workshops anymore is that everything we did (and more!) in “Bot in a Day” can now be done reliably and repeatably in minutes… We do this using the latest iteration of the “Enterprise Template”, now known as the “Virtual Assistant Template”.
Ok, so the one-time setup can be a bit longer than an hour… but that’s (mostly) one time. If you are a C# dev, especially in the ASP.NET Core space, you probably have most of the tools installed already, anyway.
Anyway, I’ve been doing variations on this “Bot in an Hour” theme, using Virtual Assistant Template, all over New England, and will soon be taking it on the road to Washington DC, where I’ll be doing the shtick for Ignite the Tour in February 2020.
So the Virtual Assistant Template is a very quick way to build out the meaty bones of an enterprise-grade chatbot, especially in C# (though a TypeScript version is also available).
I won’t try to do what its own documentation does well, at this point. Rather, I’ll point you to that documentation.
Here’s my presentation slides from Boston Code Camp, which was on November 23rd, 2019. It’s more complete than the stripped down version I presented as a workshop at Global AI Bootcamp 2019 today (December 14th) at MIT.
In addition to the “Welcome to the Virtual Assistant Template” presentation for Ignite the Tour, I’ll be doing a similar presentation for Granite State NH .NET Devs on December 19th at the Microsoft Store in Salem, NH.
This is a programmer’s blog, in general. I often get down to bits & bytes of computer programming techniques. I want to take a moment to consider this, though, to help some of my less technically focused friends.
Unless your company is directly hawking tech-related services or products, you might not think of yourself as a “tech” company. Even if you sell via digital marketplaces such as websites, you might not think of yourself as a “tech” company. But can you really get away without considering technology in your strategy?
I get it. You probably don’t provide tools for computer programmers, or computer hardware builders. You have a savvy friend who helps you with your company web site, and another one who helps you decide what laptop model to give your top staff.
That said, you have competition that’s constantly re-inventing YOUR business model with technology. Even if a startup doesn’t come out of the woodwork to take a crack at your market, a goliath like Amazon is likely already stirring the pot. By cutting across every market in the known universe, Amazon may not technically be a monopoly, but they are using technology (and especially Artificial Intelligence) to scale and stack the odds against you.
What is a CTO? A Chief Technology Officer is a title and role usually associated with someone in the executive suite of a generally publicly traded company. That said, a CTO can be a role of a privately held company, as well.
What does a CTO do, exactly? The specifics of the role differ from company to company. In many larger companies, the CTO role may be split across a team, with teammates having titles like Chief Intellectual Property Officer, Chief Security Officer, Chief Information Officer, and such. The following is a general description of what a CTO does.
The bottom line for these roles is to do what any CxO role does: improve the overall value of the organization to its stakeholders, within some aspect of technology.
Chief Technology Officer
A CTO is responsible, at the topmost level, for some or all of the following:
Business strategy from a technological viewpoint.
First and foremost, a CTO is responsible for making sure technology serves the business needs and strategies of the company.
This means keeping an eye on technologies: their benefits, how much they cost, and how they fit together with technologies already in use in your company. Where possible, it may even mean being aware of high-level technologies in use at your competitors’ companies.
This also means understanding the total costs and return on investment. In the way that a home renovator should be making sure a renovation project adds to the resale value of your home, a CTO should be making sure technical aspects of business strategy add to the “resale” value of your business.
Is there an appliance that would do the trick?
Is there a service?
Which solution costs the most up front?
Which will have the best total cost of ownership?
Should we build our own solution?
Should we hire a contractor to build a solution for us?
Which will provide the best strategic / competitive advantage?
Is there an opportunity for Artificial Intelligence?
Which cloud provider(s) offer the best value for your needs?
Technical evangelism
This is about being the technical face of the company, which can aid in partnership relations, cross-partnership automation, technical talent hiring, big-picture technical vendor selection, and mergers and acquisitions.
This also means helping other employees adopt new technology solutions, and training them to use those solutions correctly.
Technical security
There is a balance between security/risk and convenience/reward that every technology solution needs to strike. The CTO is responsible for overseeing that balance related to technology.
Technical security is also about maintaining the trust of the people who use the solutions: trust that each solution is correct, does the needed job, and does so while limiting the company’s risks.
Technical project oversight
While a CTO isn’t generally a project manager in the classic sense, the CTO does oversee the technical aspects of a project. This includes helping to guide project requirements, making sure implementers have the tools they need to get the job done, and making sure project decisions are practical in terms of budget and return on investment.
It’s also about making sure projects are built according to requirements and needs, and remain on strategy as they are being built.
Technical review of vendor and product selection also falls in this area.
Technical debt management
No technical decision comes without some recurring cost. This recurring cost is typically a trade-off; the lower the up-front cost, the higher the technical debt tends to be. Sometimes technical debt is a subscription fee. Other times, technical debt can be in the form of a sub-process that must be done “manually” because it costs too much to automate.
As solutions age, they become less competitive, thus contributing to technical debt.
Integrations between systems are often contributors to technical debt. Proper analysis will show which integrations should be automated, and which can be left to “manual” workflows.
Technical team management
A CTO is often responsible for helping to identify and hire candidates for internal positions, including technical systems implementers and integrators.
A CTO needs to know when to delegate. Delegation may be in terms of individual tasks to teammates, or project management.
A CTO is not responsible for having all the answers on the tip of their tongue, but for knowing how to find appropriate expertise within a reasonable time frame as needed.
There are a number of reasons why bringing a CTO on staff might seem difficult.
One way to get started is to consider bringing a consultant on board to serve in the role.
Such a consultant will:
be able to work part time with you while demands are lower, potentially splitting their own needs across other clients like you, thus lowering your costs.
be focused on the guidance and success of your business strategies while working for you.
be willing to sign non-disclosure and non-compete agreements.
be ready to help you find a permanent placement for the role as the role grows into the need.
perform all the functions of a classic CTO while in this capacity for you.
In conclusion, there’s no reason any company of any size today should not think of itself as a technology company. Further, as such, having a person in the role of Chief Technology Officer is more than practical. In all but the smallest of businesses, it’s fully necessary.
Welcome to WordPress. This is your first post. Edit or delete it, then start writing!
OK, that’s not exactly true. I didn’t write the above block of text.
But this is a good place to do some explaining, so I’ll keep it, and go forward.
My world has been a little off the normal path lately.
My preferred state is heads-down hammering out solutions with teammates.
Instead, I’ve been back & forth between virtual assistant / chat bot projects, an Angular JS (on SharePoint!) project, a touch of Xamarin, and a hobby project built on Uno Platform.
In between, I’ve been trying to re-brand my users groups, get Granite State Code Camp rolling, and prep for Ignite, Ignite Tour, while attending & absorbing Mastery (Insight’s leadership conference, held in Arizona last week).
It’s these re-brandings and event org notes I’ll chat about here, but first, back to the “Hello World!”.
You may notice this site, (along with this blog) has changed URLs. Sponsorship funds for keeping the domain and site hosted by WordPress itself got low enough that I couldn’t justify the several hundred dollar renewal. As an MVP, I was able to request an Azure sponsorship, and it was granted. As a result, I spun up a WordPress instance in Azure. I’ll take responsibility for botching the domain name transfer, so this site is now www.granitestateusersgroups.net (instead of .org).
I apologize for the 404’s that will inevitably result.
Hopefully this will last a while. When this sponsorship dries up (as all sponsorships do at some point) I’ll be able to move it to wherever the cheese shows up when that time comes.
For now, I’m psyched to have it hosted directly on Azure, including the DNS management.
Now for the users group rebrandings.
The former Granite State Windows Platform App Developers (also known as #WPDevNH) is now Granite State NH .NET. While the focus of the #WPDevNH group had been the Windows 10 API (also known as Universal Windows Platform), Microsoft has soft-abandoned UWP in favor of the ACTUALLY “Universal” .NET Core. Starting with .NET Core 3 (in beta now, GA due in a matter of weeks), the things that made UWP awesome will be baked into .NET Core. When .NET Core 3 gets updated to “.NET 5”, the platform will be almost as “universal” as Java, without the… JVM.
We have what feels like a string of conferences to review for the next several meetups. Xamarin Conf just occurred, with a review due for it. Uno Conference is coming mid-September, and we’ll do a review of that as well. October will have the Microsoft-hosted .NET Conference, and November will have Microsoft Ignite. All of these are review-worthy, so that will take us right into the holidays, believe it or not!
Likewise, the former Granite State SharePoint Users Group (#NHSPUG) will tentatively be known as the Granite State Microsoft 365 Users Group. This comes with a leadership change for the group. Julie Turner stepped down, and handed the reins over to Derek Cash-Peterson. Julie and Derek are both former BlueMetal teammates of mine, now both with Sympraxis Consulting. I will continue to work with Derek on this group, as well as with Derek, Julie, Marc Anderson (also of Sympraxis), and others on SharePoint Saturday New England 2019. This Granite State Microsoft 365 Users Group feels a little overdue. Our sister groups in the Boston area flipped branding to “Office 365” several years ago. We kinda skipped that trend when Microsoft lumped Windows into the offering and dubbed it “Microsoft 365”. In a sense, both the users groups I directly support will indirectly support Windows usage & development now. I say “tentatively” mostly because I don’t recall if the last conversation I had with Derek about it counted as “official”.
Both these re-brandings bring the two users groups much more in line with our patron sponsor’s (Microsoft’s) exciting plans for the future. I’m psyched to help usher in these changes.
Now that this website is settling in on Azure, we’ve called the kickoff meeting for the organization of the Granite State Code Camp, already on the calendar for Saturday, November 2nd at Manchester Community College. Please stay tuned for the major milestones to be announced:
Call For Speakers
Call For Sponsors
Attendee Registration
All of those should be coming very shortly.
I’ll get the rest of this site updated with 2019 information soon. Let me know what I’m forgetting & Stay tuned!
This looks like a fantastic lineup of events being presented in the week of May 16th-23rd, generously hosted by our friends at the Microsoft Store in Salem, NH. (I’m not involved, nor is Granite State Users Groups… I’m just posting to help get the word out for a worthy cause.)
Microsoft at The Mall at Rockingham Park | Ability Week 2019
Free accessibility resources
Join Ability Week’s free workshops for insights and tools to support people with hearing, vision, mobility, and learning disabilities.
Make your Business More Accessible
One in five people live with an accessibility issue. This free workshop shows you how no-cost tools in Windows 10 and Office 365 can make your business more accessible for both employees and your customers. In one hour, you’ll learn:
how to create an inclusive culture and hiring process
how to build accessible materials
what ongoing accessibility resources and tools are available
Be empowered by technology with Microsoft accessibility tools
In this free, one-hour workshop, you’ll learn accessibility features in Windows 10 and Office 365 relevant to your life. You will leave empowered to communicate and experience the world through these tools. During this workshop, you will:
Learn how Microsoft’s assistive technology can empower how you communicate, learn, and experience the world.
Explore accessibility tools and features of Windows 10 and Office 365.
Discover relevant Microsoft resources and ways to continue your learning after the workshop.
Explore inclusive technologies for people on the autism spectrum with Windows 10 and Office 365
This free workshop shows you how to activate and use accessible and inclusive features built into Windows 10 and Office 365 that may support people on the autism spectrum. In just one hour, participants learn:
Tech built for different learning styles and abilities
Microsoft Learning Tools
Resources to use accessibility features from Microsoft
Empowering students affected by dyslexia with Windows 10 and Office 365
Are you looking for more tools to support your students or child who may need a boost in reading comprehension and confidence, including those affected by dyslexia? Would you like to learn how to access and use the accessibility features built into Windows 10 and Office 365? Please join us at the Microsoft Store for a free, informative, and hands-on workshop introducing educators and parents or caregivers to the Microsoft Learning Tools that implement proven techniques to improve reading and writing for people regardless of their age or ability. Participants will:
Explore tools to empower different learning styles and abilities, and tools to support students with disabilities.
Get hands-on experience with Microsoft applications and tools including Learning Tools, the Ease of Access menu, and accessibility and productivity features of Office 365.
Gain resources to continue to explore Learning Tools and accessibility tools and features.
Harry Potter Kano Coding Kit Workshop (ages 8+, autism-friendly event)
This free autism-friendly workshop introduces students eight and up to foundational coding concepts through the Harry Potter Kano Coding Kit wand, drag-and-drop coding, and Harry Potter spell motions, creatures, and artefacts. Alternate activities allow a broad level of participation, and parents are welcome to join with their child.
The parent, legal guardian, or authorized adult caregiver of every workshop participant under 17 years of age must sign a Participation Agreement upon arrival and remain in Microsoft Store for the duration of the event.
As many folks in my community may already be aware, I’ve been building chatbots with my team, using the Microsoft Bot Framework, a lot lately. In doing so, we’ve encountered a common issue across multiple clients.
While many people are worrying about lofty issues around artificial intelligence like security, privacy, and ethics (all worthy, to be sure), I’m considering something more pragmatic here. Folks go into a cognitive agent build without considering content, how it relates to AI and AI development, and how to manage it. While some of my clients with more mature projects have taken a crack at resolving this issue with custom solutions, these custom solutions are often resource intensive, fail to consider all the business requirements, and end up becoming an unnecessary bottleneck to further development. Worse, waiting till a project phase-2 or phase-3 to address it compounds the trouble.
Sadly, there’s often an enterprise content management system (ECMS) in place that could be used instead, right from pre-phase-1. With a reasonable effort, a well-featured existing ECMS can be customized alongside your build-out, saving a massive effort later.
The Backstory
If you check out the Microsoft Bot Framework website, one of the first things you’ll notice is that building conversational agents is a process that cuts across a number of development disciplines… and the first one that typically gets highlighted is Artificial Intelligence.
Artificial Intelligence around conversational agents could include anything from visual identification & classification to moderation, sentiment analysis, and advanced search, but it predominantly revolves around language tools… especially LUIS, QnA Maker, Azure Search, and others.
At this point, it helps to think about what Artificial Intelligence is. Artificial Intelligence is about experience. In a conversational artificial intelligence, that experience is human readable, social, and web-like. Experience is content… conversational content.
In fact, it’s almost web like. A user typically opens a chat window (which correlates a bit to a browser) and types an utterance (query). The bot catches such utterances, and depending on a number of factors of origination, data state / context, identity, and authorizations, generally produces a text based response.
Getting More Specific
In the case of a bot designed to coach folks with a chronic disease, for example, a user might ask a bot “Can I eat chocolate cake?”. The bot gets this query and parses it into language elements… which look something like “can I eat” as an ‘intent’, and “chocolate cake” as an ‘entity’. The bot then brings in a rules set described by conditions it knows about the user (what disease(s) are being managed) and what the bot knows about the user’s current state (perhaps the blood glucose level, if they’re diabetic). Based on the conditions against the rules, a response is produced. If you have a sophisticated bot, you might have a per-entity response… Take a response like “Oh, chocolate cake is wonderful, but your blood sugar level is a bit high right now. Unless you can find something in a low-carb, sugar-free variety, I wouldn’t recommend it, but here’s a recipe you might try instead.” That content (including the suggested recipe) must be authored by subject matter experts, moderated by peers, approved (potentially by regulatory and maybe even legal teams), and tagged to match the rules engine’s expectations… much like web content.
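As a sketch of that flow (TypeScript, with made-up intent names, conditions, and thresholds; not a real LUIS call or rules engine), the parse-then-rules step might look like:

```typescript
// A parsed utterance: intent plus entity, as the language service might return.
interface ParsedUtterance { intent: string; entity: string; }

// What the bot knows about the user. Threshold values here are illustrative only.
interface UserState { condition: string; bloodGlucose: number; }

// Conditions checked against a (hard-coded) rules set to produce a response.
function respond(u: ParsedUtterance, state: UserState): string {
  if (u.intent === "CanIEat" && state.condition === "diabetes") {
    if (state.bloodGlucose > 140) {
      return `Your blood sugar is a bit high right now; I wouldn't recommend ${u.entity}.`;
    }
    return `A small portion of ${u.entity} should be fine.`;
  }
  return "I'm not sure; let me find a coach who can help.";
}

console.log(respond({ intent: "CanIEat", entity: "chocolate cake" },
                    { condition: "diabetes", bloodGlucose: 180 }));
```

The point of the sketch: every string the function returns is content someone had to author, review, and approve; hard-coding it like this is exactly what a content management pipeline should replace.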
Also note, the rules engine itself is content in a sense. In order to let subject matter experts have a say in tweaking and tuning responses (what’s a high blood glucose level? what’s too much sugar for a high blood glucose level? et al.), these rules should be expressed as content a subject matter expert could understand and update.
Another common scenario we’re seeing is HR content. Imagine you’ve got a company that produces an HR handbook every year. Well, actually, you’re a conglomerate that has a couple dozen handbooks, and each employee needs answers specific to the one for their division… Not only do you have to tag the content by year, but by division, and even problem domain. Imagine trying to answer the question “what is my deductible?” It’s easy enough for a bot to understand that this relates to insurance provided through benefits. The answer might be different depending on whether you mean the PPO or the PMO medical plan… or is that a dental plan question? What about vision? They probably mean this year. Depending on the division they’re a part of (probably indicated by a claim in their authorization token), they might have different providers as well.
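A rough sketch of that tagging problem (TypeScript; all questions, answers, and metadata values here are invented for illustration), where each answer entity carries metadata and the bot filters by what it knows about the user:

```typescript
// One answer entity, tagged the way the HR scenario above demands.
interface AnswerEntity {
  question: string;
  answer: string;
  meta: { year: number; division: string; domain: string };
}

// A tiny made-up knowledge base: same question, different domains.
const kb: AnswerEntity[] = [
  { question: "what is my deductible?", answer: "$500 (PPO medical)",
    meta: { year: 2019, division: "east", domain: "medical" } },
  { question: "what is my deductible?", answer: "$250 (dental)",
    meta: { year: 2019, division: "east", domain: "dental" } },
];

// Filter by the claims the bot holds for this user (division, domain, year).
function lookup(q: string, division: string, domain: string, year: number): AnswerEntity[] {
  return kb.filter(e => e.question === q &&
    e.meta.division === division && e.meta.domain === domain && e.meta.year === year);
}

console.log(lookup("what is my deductible?", "east", "medical", 2019));
```

In a real build, the metadata filter would be applied by the knowledge-base service itself rather than in bot code, but the shape of the problem is the same.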
Back to Development
In the development world, not only do you have the problem domain complexities present, but you also have different environments to push content to… the Dev environment is the sandbox a coder works in actively, and it’s only as stable as the developer’s last compile. Then there might be environments named things like DIT, SIT, UAT, QA, and PROD. To do things right, you should update content in each of these environments discretely… updating content in QA should not affect content in SIT, UAT, or PROD.
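One way to keep content discrete per environment is a simple map from environment name to that environment’s own knowledge-base endpoint, so a publish touches exactly one target. A minimal sketch, with placeholder URLs:

```typescript
// Each environment gets its own content endpoint; none share state.
// These URLs are placeholders, not real services.
const environments: Record<string, string> = {
  DIT:  "https://dit.example.com/kb",
  SIT:  "https://sit.example.com/kb",
  UAT:  "https://uat.example.com/kb",
  QA:   "https://qa.example.com/kb",
  PROD: "https://prod.example.com/kb",
};

// Resolve the single endpoint a publish run is allowed to touch.
function publishTarget(env: string): string {
  const endpoint = environments[env];
  if (!endpoint) throw new Error(`Unknown environment: ${env}`);
  return endpoint; // a real publisher would POST the content payload here
}

console.log(publishTarget("QA")); // only QA's endpoint is resolved
```

Updating content in QA then literally cannot affect SIT, UAT, or PROD, because each publish run resolves exactly one endpoint.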
Information Architecture
Information architecture (IA) is the structural design of shared information environments; the art and science of organizing and labelling websites, intranets, online communities and software to support usability and findability; and an emerging community of practice focused on bringing principles of design, architecture and information science to the digital landscape. Typically, it involves a model or concept of information that is used and applied to activities which require explicit details of complex information systems. These activities include library systems and database development.
We’ll add artificial intelligence cognitive models and knowledge bases, especially for conversational AI, to that definition. Note that some AI applications need big data solutions. Most ECMS products are not big data solutions.
Enterprise Content Management Systems
There are a lot of Enterprise Content Management Systems out there, many of which would be suitable for handling the content management needs of most conversational AI solutions.
My career path and community involvement cause me to lean toward SharePoint. If you break down the feature set, it makes sense.
Ability for SMEs to manage experience data easily without lots of training to understand create/read/update/delete (CRUD) operations
Ability to customize content type structures
Ability to concurrently manage individual experience data items
Ability to globalize the content (to support multiple languages)
Ability to customize workflows (think SME review approval, regulatory, even legal approval) on a per-experience item basis
Ability to mark up each experience item with additional metadata both for cognitive processing purposes and for deployment purposes
All of this content is then exposed via REST services, so you get the ability to integrate automation to bridge the content into the cognitive models
It’s often said that if you design your data structures properly, the rest of your application will practically build itself. This is no exception. While you will have to build your own automation to bridge the gap between your CMS and your cognitive model environments, you’ll be able to do this easily using REST services.
While you may need to come up with your own granularity, you’ll probably find some clear hits, especially in the area of QnA Maker… every question/answer experience pair probably fits nicely as a single content entity. You’ll probably have to add metadata to support QnA Maker’s filtering, and the like.
Likewise with LUIS, you may find that each intent and its related utterances form a single content entity. LUIS, being more sophisticated, will also need related entities and synonyms modeled in content data.
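As a hedged sketch of what that content mapping might look like (TypeScript; the field names here are my assumptions about an ECMS list item, not the actual QnA Maker or LUIS import formats):

```typescript
// A hypothetical ECMS list item: one content entity per QnA pair or per intent.
interface ContentItem {
  title: string;          // the question text, or the intent name
  body: string;           // the answer text (unused for intent items)
  kind: "qna" | "intent";
  utterances?: string[];  // sample utterances, for intent items
}

// Map a content entity to a QnA-style question/answer pair.
function toQnaPair(item: ContentItem) {
  return { questions: [item.title], answer: item.body };
}

// Map a content entity to a LUIS-style intent with its utterances.
function toLuisIntent(item: ContentItem) {
  return { name: item.title, utterances: item.utterances ?? [] };
}

const item: ContentItem = { title: "What is my deductible?",
  body: "See your plan summary.", kind: "qna" };
console.log(toQnaPair(item));
```

The bridge automation would pull items like these over the ECMS’s REST services and push the mapped shapes into the cognitive model environments.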
I’ve seen other CMS systems used, most notably CosmosDB and Contentful. Another choice might be some kind of data mart. All of these cases require a heavy investment in building out a UI layer for your SMEs. SharePoint takes care of the bulk of that part for you.
Got a project you want to start working on? Don’t forget to account for content management early on. As always, reach out to me if you need advice on this or any other aspect of building out a solution involving technologies like these… Connect on Twitter, LinkedIn, or the like…
This showed up in the mail today! Despite the April 1st date, it’s not an April Fools’ gag after all! I’ve only ever seen one of these trophies in person before this one. I’ve been trying to stay chill about it…. but heck, here it is…
To understand this perspective, we’ll need to walk through some key terms….
What is Silverlight?
For those who don’t know, about ten years ago, Silverlight was the way to write C# and XAML to run in the web browser. It required a plug-in to run, much like Adobe Flash Player. Unfortunately, Microsoft announced the… untimely demise of Silverlight in 2012. Silverlight, to some extent, was a catchier term than other related technology names, so Microsoft used Silverlight as the name for mobile platforms that are also now deprecated. As a result, it became almost synonymous with XAML.
What is XAML?
XAML, “eXtensible Application Markup Language”, is the markup language behind a few great UI/UX layers in various Microsoft .NET-oid languages. For those who’ve used it, it’s an addictively cool language family. Using Visual Studio, Blend, and Adobe DX, you can create first-class UI. With features like Storyboard animation, basic animation becomes child’s play. Composition makes fast, dynamic animations easy. Once you’ve gotten the basic idea of it, you find yourself wanting to use it anywhere you can… or at least that’s been my experience through WPF, Silverlight, Silverlight for Windows Phone, Silverlight for Windows Phone 8/8.1, Universal Windows Platform (UWP), and probably others.
The “code behind” XAML is typically C#, and historically .NET based.
What is Universal Windows Platform (UWP)?
UWP is the native platform of Windows 10. It’s similar to classic .NET in a few ways. First, UWP feels a lot like Windows Presentation Foundation (WPF) and .NET, being XAML and C# based, respectively. It differs from classic .NET because it has a lot of fixes, both in terms of security and performance, that .NET can’t afford to apply for various reasons. More simply put, .NET had some serious technical debt built up, so the easiest way to forgive that debt was to build a new platform based on the old languages. Your XAML and C# skills are the same, but the namespaces and supporting framework libraries are different.
Don’t fret, though… UWP runs natively on over 800 million devices (as of this writing, December 22nd, 2018), and that number continues to grow. UWP is the native platform for all Windows 10 devices: desktops, laptops, tablets, phones, HoloLens, Xbox consoles, IoT embedded devices, and more.
What is WebAssembly?
WebAssembly is a relatively new bytecode language specification… a virtual machine specification, similar in spirit to the Java Virtual Machine (JVM), that is fully supported by most modern major web browsers. It allows near-native performance in the same sandbox that JavaScript apps run in. When you run JavaScript in a web page, the browser’s JIT compiler converts the code into tokenized bytecode in order to execute it quicker. WebAssembly improves on this significantly by pre-compiling the code. And because the code is pre-compiled, it doesn’t have to be sourced from JavaScript; it can be compiled from just about any programming language. Wasm, as it’s called, went from a bare specification to being well supported in all major modern web browsers in just a few short years.
What is Uno Platform?
Uno Platform, for our purposes, is not really a new platform, but an extension to UWP.
You write your UWP application for your Windows 10 devices the same way you always have. Uno provides a mechanism to re-compile that UWP app to WebAssembly (and, by the way, using Xamarin tools, also to iOS… and to Android!)
In a sense, Uno Platform is to UWP as Xamarin is (roughly) to classic .NET.
See the connection?
Let’s do some math…
UWP = C# & XAML for Windows 10. (800,000,000 devices)
Uno Platform += UWP for iOS (millions more devices), Android (over a billion devices), and WebAssembly (any device with a modern major browser)
Now factor in this…
.NET Core 3 += UWP for services
What does all that add up to?
One skill set…
UWP (C# & XAML) = FULL STACK, on all major platforms
From data access layer to REST API to UI canvas.
Wait a minute… What about Xamarin?
Xamarin is the older way to do cross-platform / mobile development in C#.
Coincidentally, just this past Thursday, Carl Barton, a Microsoft MVP for Xamarin, presented the Xamarin Forms Challenge at the Windows Platform App Devs user group. The goal of the meetup was to demonstrate creating a simple app in C# and running it on as many platforms as possible within the hour. He easily got the app running on over a dozen platforms.
Uno Platform actually depends on Xamarin libraries to support iOS and Android.
The main differences between Xamarin and Uno Platform are these:
Xamarin encourages you to use a Xamarin-specific dialect of XAML, including Xamarin Forms, to express your cross-platform UI.
Uno Platform uses Microsoft’s UWP dialect of XAML, so if you already know and understand that dialect, you’re ready to go.
Xamarin enables you to produce binaries for dozens of different target platforms, reaching a billion or more devices. These include .NET, UWP, iOS, Android, Tizen, Unity, ASP.NET, and many others.
Uno Platform only enables you to reach three additional binary output targets… iOS, Android, and WebAssembly… but WebAssembly can, or likely soon will, cover most of what Xamarin Forms covers.
I’ll leave it up to you which to choose, but for me, given the choice between Xamarin, with several years of technical debt built up in a distinct dialect of XAML, and Uno Platform, using the fresher, native UWP dialect of XAML… well, you can guess where I land.
Finally…
Here are the slides I presented most recently at the New England Microsoft Developers meetup in Burlington, Mass on December 6th (thanks again to Mathieu Filion of nventive for much of the content):
I ran across this article from Forbes on LinkedIn. It’s an interesting piece about how Kroger is reacting to the threat that Amazon / Whole Foods suddenly represents in its market segment.
The Amazon/Whole Foods merger represents a heavily modernized re-make of a traditional business, and it is expected to put grave pressure on the rest of the grocery segment.
If your market segment isn’t feeling this kind of pressure already, you likely will be soon.
Your business has only a couple of choices when it comes to modernization.
React to the pressure that your market segment is under already.
Begin preemptively, and be the pressure the rest of your market segment feels going forward.
I remember the days of building “nextgen” software. That model has scoped up a few times: to vNext services, to next-gen infrastructure / cloud, to the vNext IT division.
Either way, it’s time to start developing your company’s “nextgen enterprise” strategy.