- People might second-guess your solution, potentially identifying a flawed design more quickly.
Similarly, by clearly explaining your solution or approach, you might
surface new insights yourself (a.k.a. the cardboard programmer). At the
same time, you might get more buy-in from your team.
- It increases the trust and respect people have for you. They won't only see you as the fix-it-all guy, but also as the go-to guy for advice. In a way, it makes you approachable. Especially if you're filling a high-profile role, being approachable to new people or people without strong communication skills is essential for an open and efficient work environment.
- Seeing your colleagues solve problems with just a bit of help can increase the trust you have in them. And if you trust the people you're working with, you'll also delegate responsibility to them more easily. I'm pretty sure that will make your life much easier.
- It gives people more autonomy and allows them to learn from their mistakes, which will significantly increase their capacity. In retrospect, the single biggest mistake I made in my career was trying to keep people from making mistakes. It cost me a lot of energy, and never gave them a chance to learn and grow.
- If you teach somebody a new keyboard shortcut, a debugging trick or a convenient command-line tip as part of a solution, chances are that person will pass that knowledge on to other colleagues, spreading it much faster than you could alone.
- If people feel the solution is theirs, they usually also feel more responsible for it, automagically increasing their commitment to it. At the same time, successfully solving a problem will boost their confidence and the energy with which they take on the next challenge.
- Being the one with all the knowledge and skills may put you in a powerful position for a while, but at some point, you simply won't be able to handle all that work anymore. If you can't distribute the work to others, taking a couple of days off to spend some time with your family or attend that awesome conference will quickly become difficult or impossible.
Thursday, May 28, 2015
Saturday, May 16, 2015
In a recent post, I concluded that I have a strong tendency towards tactical architecture. From that perspective, I try to avoid big design up-front. I have a built-in allergy to rebuilding stuff that is already out there. I would never consider building my own message bus or event sourcing framework, for instance. I also react pretty strongly when people suggest things like that, and I believe this is a common source of project failures. I know that I myself can be way too optimistic about any development work I'm starting, regardless of my pessimistic eye for roadblocks and risks. So if an experienced developer claims some big design effort is going to be pretty easy or obvious, they'll have a very hard time convincing me.
But there are two other factors to include in such a tradeoff. First of all, I do know that occasionally some level of strategic architecture is needed. You can't just build a large-scale distributed system without doing at least a decent amount of up-front design. But even then, I'd like to keep an eye on the reversibility of such a design decision. In other words, if we can postpone a design decision without having to undo or redo a lot of work later on, then I would postpone it. If we can't, then yes, a much more elaborate design has to be done first.
The other factor is the commitment your fellow developers will have towards a design. Here, by commitment, I mean how responsible they feel for a certain component or design choice. This is a question I've been pondering for years, and it has been, and still is, one of the most difficult things to cope with. It's ironic how I decided to pursue a technical career where my biggest challenges are people-related. Over the years, I've learned that ultimately a person feels most responsible for the choices they made themselves, a theory that Christopher Avery's stages of responsibility illustrates well. On a larger scale, where individual choices become less relevant, you can use techniques such as those covered by the Culture Engine to get people to commit to certain organizational choices.
All in all, you can see how this tradeoff has some pretty important subtleties. So what happens when you find yourself eye-to-eye with another developer who has contrasting ideas on how to approach software architecture? I haven't finalized my conclusion yet. But right now I'm in such a situation, where I've given the people around me a lot more room to do what they think is best than I usually do. I was reluctant about some of their ideas, and I still have some reservations, but I do see a lot of commitment to the decisions they have made. By now, they've realized that building some of the things they are building isn't as trivial as they thought. Each day we run into more little obstacles, but they keep doing what it takes to make it a success. Ultimately, however, we'll have to see if that's enough and whether or not my suspicions were correct.
So what do you do to get the developers in your teams to feel responsible for the things they are working on? Let me know by commenting below or tweeting me at @ddoomen.
Monday, May 04, 2015
The beauty of attending conferences is not just about hearing the latest and greatest (which you can read on the internet anyway); it's the time away from the daily job that allows you to really immerse yourself in new information. If you can, I recommend attending a conference in another time zone so that the chances of you checking your email or other collaboration tools are minimized. And it doesn't matter whether you're catching up with some technology or practice you've been ignoring, or just trying to pick up some nice ideas that will help you during your next design challenge; it's all worth it.
So next to all the cool stuff about Microsoft's new universal platform and the cross-platform mentality that we've been seeing throughout Build 2015, I obviously attended numerous sessions not directly related to that. Not all were great. Heck, some were really bad, which certainly doesn't apply to Scott Hanselman's sessions, since he is by far the best speaker in the entire .NET ecosphere. Regardless, I collected a couple of takeaways that you might be interested in as well.
Since my current client is using Amazon AWS, I've been neglecting Azure for the most part, although I know that the number of features has increased considerably since Scott Guthrie took charge. This week, that same Scott claimed that Azure now has more data centers than Amazon and Google combined and is running one million servers. Irrespective of the truthfulness of those figures, the feature set of Azure is stunning. I think you can safely conclude that Amazon's services are mostly about infrastructure, whereas Microsoft is trying to solve real business problems.
Mark Russinovich (the co-author of Windows Internals) showed the Azure Resource Manager and how to build templates for complex deployments consisting of many resources (web sites, databases, networks, security elements) and how to parameterize them. I'm not sure how this relates to PowerShell DSC or how it compares to open-source equivalents such as Vagrant and Puppet. But after this session, I came to realize how little I know about Azure.
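For context, such an ARM template is just a JSON document listing parameters and resources. Here's a minimal, hypothetical sketch (the `siteName` parameter and the single web site resource are my own illustration, not from the session) showing the overall shape:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "siteName": {
      "type": "string"
    }
  },
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2015-08-01",
      "name": "[parameters('siteName')]",
      "location": "[resourceGroup().location]",
      "properties": {}
    }
  ]
}
```

A real deployment would list many such resources, with the bracketed expressions wiring parameters and resource-group properties into each one.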
During my trip to QCon last year, I already concluded that the operational side of micro-services is going to be the biggest hurdle. And as expected, several companies are trying to fill that gap, including Microsoft. Service Fabric is their micro-services hosting platform, which supports both stateless and stateful micro-services. The former would use a relational or NoSQL database for persisting state, whereas the latter can use something that Microsoft calls reliable collections and dictionaries, which reduces complexity by not needing any separate persistence mechanism. Service Fabric also supports a third type of service through Microsoft's own actor framework. It was used in all Service Fabric demos, so it's obviously being developed actively.
Scott Hanselman demonstrated another Azure service called Azure API App Services. In essence, you're just looking at WebAPI controllers hosted in Azure, but enhanced with rich telemetry support and very sophisticated operational dashboards. Migrating existing WebAPI controllers to Azure API Apps appeared to be nothing more than adding a couple of NuGet packages. And if those services use Swagger, you'll make your DevOps teams very happy.
Swagger and Swashbuckle are fantastic for adding API documentation to your HTTP services; Microsoft uses them internally to create the Azure SDK. Swagger can even serve as a kind of WSDL for those services, something both Azure API Apps and the new Logic Apps rely on heavily. You can do this yourself too, for instance by using AutoRest to generate code from a Swagger-enabled REST API that hides the HTTP ugliness. And if your services run in the cloud, you can use Runscope to debug those HTTP APIs.
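To illustrate, a Swagger (2.0) description is itself just a JSON document describing your endpoints. A minimal fragment for a single GET operation on a hypothetical Orders API might look like this:

```json
{
  "swagger": "2.0",
  "info": {
    "title": "Orders API",
    "version": "v1"
  },
  "paths": {
    "/orders/{id}": {
      "get": {
        "parameters": [
          {
            "name": "id",
            "in": "path",
            "required": true,
            "type": "integer"
          }
        ],
        "responses": {
          "200": {
            "description": "The requested order"
          }
        }
      }
    }
  }
}
```

Tools like AutoRest consume exactly this kind of document to generate strongly typed client code, which is why the format can act as a WSDL-like contract.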
Related to that, Azure Application Insights can do something similar on a much broader scale. Just add a NuGet package to your web site, mobile app or Windows application and access the diagnostics environment on Azure. The application itself doesn't even have to be hosted on Azure. It looked so impressive that I believe it can rival the products of AppDynamics or New Relic. The speaker mentioned that you can also use it to monitor iOS and Android apps, but I missed the explanation of how that would work.
On Visual Studio 2015
- GitHub has been working with Microsoft to better integrate GitHub into the development experience through two add-ons. The first enhances the Git Team Explorer with GitHub-specific features such as two-factor authentication and pull requests. The second allows you to search for code on GitHub directly from within the Developer Assistant.
- Web Essentials, Mads Kristensen's pet project, is receiving support for ReactJS, a long-standing request.
- Visual Studio 2015 RC has built-in support for JSON Schema, is much faster than the CTPs and now allows you to edit-and-continue while debugging.
- CodeLens, a feature that was only available in the Ultimate edition of Visual Studio 2013, is now available starting with Visual Studio 2015 Professional.
- The performance analysis tooling in Visual Studio 2015 is pretty impressive as well. For example, PerfTips give you the execution time of statements while you're debugging your code. This feature tries to ignore debugger overhead as much as possible and is available in every edition of Visual Studio 2015. The tooling also supports memory snapshotting and comparisons while debugging, and provides many tools for diagnosing network usage and WPF/XAML rendering bottlenecks.
- Talking about embracing Android and Apple, there's now a Windows Phone emulator for MacOS. Go figure that.
Other random things
- AutoPoco is a nice tool for generating test data based on conventions, with a fluent API to easily create semi-random objects or batches of similar objects.
- ManifoldJS can generate container apps from a mobile web app for all major platforms using a single command-line invocation.
Friday, May 01, 2015
Let's be honest here. Since Microsoft started shipping technical previews of Windows 10, I've repaved my laptop a couple of times and then reverted to Windows 8.1. I didn't like the intermediate touch experience, and stability wasn't its greatest strength either. In fact, I reverted to 8.1 just before Build because the preview drained my Yoga's battery way too fast. All in all, this has been my most disappointing experience with a pre-release of a new Windows version ever.
However, at Build 2015, Microsoft demonstrated that its claim of having one platform for all devices is really warranted. In line with this blog's theme, they've been continuously improving that experience, heavily relying on feedback from the people registered for the Windows Insider program. What they've shown during the keynotes looks absolutely stunning. I'm looking forward to upgrading to the latest build as soon as I'm back home.
So what made my engineering heart beat a bit faster? Let's start with this nice picture:
As I understand it, the goal of the Universal Windows Platform is twofold:
- To be able to run Windows 10 applications on the broadest set of devices. Windows 10 will run on laptops, tablets, Microsoft/Nokia phones, Xbox One, Raspberry Pi 2 and even the Microsoft HoloLens.
- To provide deep integration with Windows 10 features such as Cortana, notifications and Continuum.
With respect to the first point, IMHO one of the coolest things they are doing is using the same core operating system between phones and desktops. To prove that that is really true, they connected a keyboard, mouse and display to the phone. I was expecting to see the Windows Phone home screen enlarged to fit the screen. Instead, we got a real Windows 10 desktop with a start button that looks like the Windows Phone home screen extended with extra options and settings. And that's not all, the presenter at some point opened a .pptx file from an email and launched a real version of PowerPoint. This is what Continuum is about; dynamically switching between form factors and adapting the user experience accordingly.
With respect to the second point, Microsoft went to great lengths to allow deep integration with Windows 10. Since she doesn't speak Dutch yet, I personally don't care too much about Cortana. However, I do see the potential of deep integration; we're way beyond the point of setting an alarm clock using voice commands. So, to make access to applications as easy as possible and keep the install/uninstall procedure clean, they've increased the importance of the Windows Store. For instance, you can now distribute traditional Win32 and full .NET Framework based applications through the Store. At the same time, they allow you to wrap a traditional website in a container, enrich it with Cortana and notification support, and distribute it through the Store as well.
The same goes for the browser experience. Microsoft's new browser, Edge, has been rebuilt from the ground up, obviously with a fast internal engine, rich Windows 10 integration and support for add-ons. In fact, Microsoft made it particularly easy to convert Firefox and Chrome add-ons, since Edge uses the same industry standard for browser plug-ins.
And even Microsoft's HoloLens is running Windows 10. During the second day's keynote, they demonstrated building a Windows 10 app and then augmenting it with HoloLens-specific features. How cool is that? And if you're skeptical about the truthfulness of the marketing material, check out this review.
All in all, the sessions here at Build managed to convince me that Windows 10 is much more a universal platform for building cross-device applications than just a new version of a desktop operating system. Together with the new conversion tooling to compile Android and iOS apps and games to Windows 10, and the fact that all upgrades within the first year are free, Microsoft has a big chance of success.