<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Greg Shackles]]></title><description><![CDATA[Greg Shackles]]></description><link>https://gregshackles.com/</link><image><url>https://gregshackles.com/favicon.png</url><title>Greg Shackles</title><link>https://gregshackles.com/</link></image><generator>Ghost 4.32</generator><lastBuildDate>Fri, 03 Apr 2026 21:56:59 GMT</lastBuildDate><atom:link href="https://gregshackles.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Analyzing .NET Dependencies with Neo4j]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Recently I was doing some planning work for one of our larger repositories to determine how we might approach splitting it up, and wanted to start asking a lot of questions about the project dependencies within it. There are various great tools out there like <a href="https://www.ndepend.com/">NDepend</a> to help analyze complexity</p>]]></description><link>https://gregshackles.com/analyzing-net-dependencies-with-neo4j/</link><guid isPermaLink="false">61ce48a0437e8200017d4164</guid><category><![CDATA[F#]]></category><category><![CDATA[.NET]]></category><category><![CDATA[Neo4j]]></category><category><![CDATA[Xamarin]]></category><dc:creator><![CDATA[Greg Shackles]]></dc:creator><pubDate>Mon, 30 Dec 2019 22:22:11 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Recently I was doing some planning work for one of our larger repositories to determine how we might approach splitting it up, and wanted to start asking a lot of questions about the project dependencies within it. 
There are various great tools out there like <a href="https://www.ndepend.com/">NDepend</a> to help analyze complexity and dependencies, but I found myself wanting to query the data in a lot of different ways, as well as enrich it with knowledge we had about our projects, such as which ones were part of the deployable artifacts.</p>
<p>Since dependencies are naturally represented as graphs, particularly since they can be nested several levels through chains of dependencies, I figured I&apos;d see if I could easily get the data into a Neo4j database and start querying it that way. It ended up being very easy and worked great, so I thought I&apos;d share a quick version of what I hacked together since it was a fun and useful experiment. For this example I&apos;ll use the <a href="https://github.com/xamarin/Xamarin.Forms">Xamarin.Forms</a> repository, since it contains a number of different projects and dependencies within it.</p>
<h1 id="loadingthedata">Loading the Data</h1>
<h2 id="generatingthecsvs">Generating the CSVs</h2>
<p>Neo4j makes it nice and easy to import data via CSV files, so that&apos;s what I decided to go with. First, choose this option to locate the import folder for your database:</p>
<p><img src="https://gregshackles.com/content/images/2019/12/neo4j-importfolder.png" alt="neo4j-importfolder" loading="lazy"></p>
<p>Make note of the path of that folder, since we&apos;ll need to plug that into the next step. Next I reached for F#, my favorite scripting language for stuff like this, and started writing up a quick FSX script to find all <code>csproj</code> files in the repository and parse out their project references. My first pass at this used the XML type provider, but I ran into some parsing issues with it on some project files, and ultimately dropping down to <code>System.Xml</code> was concise enough that I stuck with that.</p>
<p>First, some functions for parsing project files and writing out the resulting CSV files:</p>
<pre><code class="language-fsharp">open System.IO
open System.Xml

let getProjectReferences (path:string) =
  let doc = XmlDocument()
  doc.Load(path)

  doc.GetElementsByTagName &quot;ProjectReference&quot;
  |&gt; Seq.cast&lt;XmlNode&gt;
  |&gt; Seq.map (fun node -&gt; 
    Path.GetFileNameWithoutExtension node.Attributes.[&quot;Include&quot;].Value)

let repoPath = @&quot;C:\code\github\xamarin\Xamarin.Forms&quot;
let neoImportPath = @&quot;&lt;your import path here&gt;&quot;

let writeFile name lines =
  File.WriteAllLines(Path.Combine(neoImportPath, name), Array.ofSeq lines)
</code></pre>
<p>This also makes the assumption that there is only one project in the repository with a given name, as a means of making things more readable by stripping off <code>.csproj</code> from the file name.</p>
<p>Next, we&apos;ll read all <code>csproj</code> files and create a map of their project dependencies:</p>
<pre><code class="language-fsharp">let allDependencies =
  Directory.EnumerateFiles(repoPath, &quot;*.csproj&quot;, SearchOption.AllDirectories)
  |&gt; Seq.map (fun path -&gt; 
    (Path.GetFileNameWithoutExtension path), (getProjectReferences path))
</code></pre>
<p>That&apos;s all the data we need, so now we just need to write out those CSV files. First, the list of projects:</p>
<pre><code class="language-fsharp">allDependencies
|&gt; Seq.map (fun (project, _) -&gt; sprintf @&quot;&quot;&quot;%s&quot;&quot;&quot; project)
|&gt; writeFile &quot;projects.csv&quot;
</code></pre>
<p>And then the dependencies:</p>
<pre><code class="language-fsharp">allDependencies
|&gt; Seq.filter (fun (_, projectDependencies) -&gt; not &lt;| (Seq.isEmpty projectDependencies))
|&gt; Seq.collect (fun (project, projectDependencies) -&gt;
    projectDependencies
    |&gt; Seq.map(fun dependency -&gt; sprintf @&quot;&quot;&quot;%s&quot;&quot;,&quot;&quot;%s&quot;&quot;&quot; project dependency))
|&gt; writeFile &quot;dependencies.csv&quot;
</code></pre>
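<p>Just to make the format concrete, <code>dependencies.csv</code> ends up as one quoted pair per line, dependent first and dependency second (these example rows are illustrative):</p>
<pre><code>&quot;Xamarin.Forms.Xaml&quot;,&quot;Xamarin.Forms.Core&quot;
&quot;Xamarin.Forms.Material&quot;,&quot;Xamarin.Forms.Core&quot;
</code></pre>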
<h2 id="importingthecsvs">Importing the CSVs</h2>
<p>Now that those are generated, we just need to import those into the database using a bit of Cypher. First we&apos;ll do the projects:</p>
<pre><code class="language-cypher">LOAD CSV FROM &apos;file:///projects.csv&apos; AS row
WITH toString(row[0]) AS name
CREATE (p:Project {name: name})
</code></pre>
<p>That will parse out each row in the CSV file and create <code>Project</code> nodes for each of them, assigning the <code>name</code> property based on the value. Next we&apos;ll load up the dependencies, matching them against the project nodes we just created, and creating a <code>DEPENDS_ON</code> relationship between each of them:</p>
<pre><code class="language-cypher">LOAD CSV FROM &apos;file:///dependencies.csv&apos; AS row
WITH toString(row[0]) AS dependent, toString(row[1]) AS dependency
MATCH (dependentProject:Project {name: dependent})
MATCH (dependencyProject:Project {name: dependency})
MERGE (dependentProject)-[rel:DEPENDS_ON]-&gt;(dependencyProject)
RETURN count(rel)
</code></pre>
<p>You can see here that the <code>DEPENDS_ON</code> relationship also indicates the direction of that dependency. Similar to properties on project nodes, we could also add properties to the relationships themselves, so a future version of this could include things like package dependencies and indicate the type of dependency as a property on that relationship.</p>
<p>Now we&apos;ve got all our projects and dependencies loaded into Neo4j and ready to query!</p>
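<p>One optional hardening step before moving on: the <code>CREATE</code> above will happily create duplicate nodes if you rerun the import, so it can be worth adding a uniqueness constraint on <code>name</code> first so a rerun fails fast instead (this uses the Neo4j 3.x constraint syntax):</p>
<pre><code class="language-cypher">CREATE CONSTRAINT ON (p:Project) ASSERT p.name IS UNIQUE
</code></pre>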
<h1 id="queryingthedata">Querying the Data</h1>
<p>Let&apos;s start simple and query out all the projects and their dependencies and visualize it, using the following query:</p>
<pre><code class="language-cypher">MATCH (p:Project) RETURN p
</code></pre>
<p>This ends up looking like:<br>
<img src="https://gregshackles.com/content/images/2019/12/neo4j-graph.png" alt="neo4j-graph" loading="lazy"></p>
<p>Ok, so that alone doesn&apos;t end up being super useful since there&apos;s a lot going on, but it still says a lot! The <code>Xamarin</code> prefix makes it a little hard to read in this form as well, but clicking through on that center node shows that it&apos;s actually <code>Xamarin.Forms.Core</code> which is clearly one of the primary dependencies within this repository.</p>
<p>The visualization side of the graph data is cool, but let&apos;s check out some of the types of queries we can easily write based on having this data loaded into a graph database. For example, which projects have the most direct dependents?</p>
<pre><code class="language-cypher">MATCH (dependent:Project)-[DEPENDS_ON]-&gt;(dependency:Project)
RETURN dependency.name, COUNT(dependent.name) AS numDirectDependents
ORDER BY numDirectDependents DESC
</code></pre>
<p><img src="https://gregshackles.com/content/images/2019/12/neo4j-directdependencies.png" alt="neo4j-directdependencies" loading="lazy"></p>
<p>One of the nice things about Cypher is how readable these relationship queries end up being, since the syntax includes the visual representation of them. Those are direct dependents, but what if we wanted to extend that to include indirect ones as well? All we need to do is add a <code>*</code> into the relationship part of that query and Neo4j takes care of the rest:</p>
<pre><code class="language-cypher">MATCH (dependent:Project)-[DEPENDS_ON*]-&gt;(dependency:Project)
RETURN dependency.name, COUNT(DISTINCT dependent.name) AS numIndirectDependents
ORDER BY numIndirectDependents DESC
</code></pre>
<p><img src="https://gregshackles.com/content/images/2019/12/neo4j-indirectdependencies.png" alt="neo4j-indirectdependencies" loading="lazy"></p>
<p>We can see that <code>Xamarin.Forms.Core</code> is clearly one of the primary dependencies, but what percentage of projects actually depend on it?</p>
<pre><code class="language-cypher">MATCH (dependent:Project)
OPTIONAL MATCH (dependent)-[:DEPENDS_ON*]-&gt;(dependency:Project {name: &quot;Xamarin.Forms.Core&quot;})
WITH DISTINCT dependent.name AS dependentName,
     CASE WHEN dependency IS NULL THEN false ELSE true END AS dependsOnCore
RETURN dependsOnCore, COUNT(*)
</code></pre>
<p><img src="https://gregshackles.com/content/images/2019/12/neo4j-dependsoncore.png" alt="neo4j-dependsoncore" loading="lazy"></p>
<p>So five projects don&apos;t depend on <code>Xamarin.Forms.Core</code>...what are they?</p>
<pre><code class="language-cypher">MATCH (dependent:Project)
WHERE NOT (dependent)-[:DEPENDS_ON*]-&gt;(:Project {name: &quot;Xamarin.Forms.Core&quot;})
RETURN DISTINCT dependent.name
</code></pre>
<p><img src="https://gregshackles.com/content/images/2019/12/neo4j-nodependsoncore.png" alt="neo4j-nodependsoncore" loading="lazy"></p>
<hr>
<p>This just scratches the surface of the types of queries you can start writing here, but even just the basics have already proven to be really interesting and valuable as I start to poke at the dependency graph in different ways to see what shakes out, especially when combined with domain-specific information about our projects. Not bad for a quick hack project!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Monitoring Akka.NET with Datadog and Phobos: Tracing]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In my <a href="https://gregshackles.com/monitoring-akka-net-with-datadog-and-phobos-metrics/">previous post</a> I started looking at how you can leverage Akka.NET&apos;s new Phobos product to start logging actor system metrics to Datadog. In this post I&apos;m going to start taking that a little further by exploring the tracing functionality it offers as well.</p>]]></description><link>https://gregshackles.com/monitoring-akka-net-with-datadog-and-phobos-tracing/</link><guid isPermaLink="false">61ce48a0437e8200017d4163</guid><category><![CDATA[Datadog]]></category><category><![CDATA[Akka.NET]]></category><category><![CDATA[Monitoring]]></category><category><![CDATA[Observability]]></category><category><![CDATA[Xamarin]]></category><dc:creator><![CDATA[Greg Shackles]]></dc:creator><pubDate>Fri, 30 Nov 2018 15:00:13 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>In my <a href="https://gregshackles.com/monitoring-akka-net-with-datadog-and-phobos-metrics/">previous post</a> I started looking at how you can leverage Akka.NET&apos;s new Phobos product to start logging actor system metrics to Datadog. In this post I&apos;m going to start taking that a little further by exploring the tracing functionality it offers as well.</p>
<p>Similar to the metrics side, Phobos provides a flexible platform that supports a variety of tracing systems, such as Zipkin, Jaeger, Application Insights, and also <a href="https://opentracing.io/">the OpenTracing standard</a>. I&apos;m a big fan of OpenTracing, and it just so happens that Datadog&apos;s APM solution works well with it, so that&apos;s what I&apos;m going with here. Just like with metrics, though, if you wanted to go with a different platform then all you&apos;ll need to change is configuration, and your code remains the same.</p>
<h1 id="settingupopentracing">Setting Up OpenTracing</h1>
<p>I&apos;m going to stick with the <a href="https://github.com/gshackles/akka-samples/tree/master/Greeter">basic Greeter example</a> I used in the last post here, since for now I really just want to see what we get out of the box. In later posts we&apos;ll explore some more interesting actor systems.</p>
<p>Just as a reminder, this is all the actor actually does:</p>
<pre><code class="language-csharp">Receive&lt;Greet&gt;(msg =&gt;
    Console.WriteLine($&quot;Hello, {msg.Who}&quot;));
</code></pre>
<p>To set up logging to OpenTracing there&apos;s only a few quick things that need to be done. First we need these NuGet packages:</p>
<ul>
<li>OpenTracing</li>
<li>Datadog.Trace.OpenTracing</li>
</ul>
<p>Once those are installed, we can initialize a global tracer in our application prior to setting up the actor system:</p>
<pre><code class="language-csharp">var tracer = Datadog.Trace.OpenTracing.OpenTracingTracerFactory.CreateTracer();
GlobalTracer.Register(tracer);
</code></pre>
<p>This creates a tracer using Datadog&apos;s OpenTracing factory and registers it as the global tracer, which is a static instance. Any traces reported to this tracer will be logged down to the local Datadog agent on the machine.</p>
<p>Finally, just like everything else in Akka.NET we need to add a little bit of HOCON to tell it how to wire up tracing:</p>
<pre><code>phobos {
    tracing {
        provider-type = default
    }
}
</code></pre>
<p>By default here it&apos;ll look for that global tracer we registered, and if it finds one it&apos;ll use that. This is why you&apos;ll want to make sure to register it prior to spinning up your actor system.</p>
<h1 id="initialtraces">Initial Traces</h1>
<p>Since we didn&apos;t really need to do much there to get tracing going, this section is mainly going to be images and an introduction to Datadog&apos;s APM interface. Let&apos;s see what we&apos;ve got!</p>
<p>The first area you&apos;ll see is a list of services that registered traces:</p>
<p><img src="https://gregshackles.com/content/images/2018/11/phobos-trace-service-list.png" alt="APM service list" loading="lazy"></p>
<p>Akka.NET will register a service with a default name based on the application, which in our case becomes <code>greeter-csharp</code>. In this overview you can see a glimpse of average latency, how many requests the service is fulfilling, error rate, etc. This list becomes much more useful when you have a bunch of services, but you have to start somewhere.</p>
<p>Next, let&apos;s click through into the service itself and look at its own overview:</p>
<p><img src="https://gregshackles.com/content/images/2018/11/phobos-trace-service-overview.png" alt="APM service overview" loading="lazy"></p>
<p>This view gives a nice high level view of how the service is doing, showing the total requests, distribution of their latencies broken into percentiles, and statistics for specific resources within that service. Phobos will create a resource for each actor in your system automatically, so here you can see the <code>/user/greeter</code> actor showing up as a distinct resource here. In a normal actor system you&apos;d have a lot of different actors, so this would allow you to see them broken out individually.</p>
<p>Now let&apos;s click through into that actor resource and see its overview. Overall it&apos;ll look a lot like the previous screen, except that in the place of the resource list you&apos;ll find a list of actual traces that were captured for it:</p>
<p><img src="https://gregshackles.com/content/images/2018/11/phobos-trace-trace-list.png" alt="APM trace list" loading="lazy"></p>
<p>By default Phobos will sample 100% of all traces, but you can easily change this with one line of HOCON if you want to only sample a percentage of messages in your system (which is generally a good idea for production systems).</p>
<p>Now let&apos;s click into one of those traces and see what we get:</p>
<p><img src="https://gregshackles.com/content/images/2018/11/phobos-trace-initial-trace.png" alt="Initial APM trace" loading="lazy"></p>
<p>Here you can see a flame graph with one span, which isn&apos;t particularly interesting on its own, but you can still see it register the duration of the call into that actor. Below that you can see all the metadata Phobos recorded with the span, which includes things like the actor path, message type, the sender, and more. This information is powerful since later on we can query our traces based on this metadata.</p>
<p>One of the other nice things here is the host info tab, where you can see how the underlying host of that actor was doing at the time of the trace:</p>
<p><img src="https://gregshackles.com/content/images/2018/11/phobos-trace-host-info.png" alt="APM trace&apos;s host info" loading="lazy"></p>
<p>This allows you to correlate things like CPU or memory usage with what your application was actually doing at the time, which is also quite powerful.</p>
<h1 id="gettingdeeper">Getting Deeper</h1>
<p>That&apos;s a glimpse of what you get effectively for free - we didn&apos;t have to make any changes to our actors and we got all that great information - but that still only goes so far. What if we want to layer in some more information?</p>
<h2 id="tags">Tags</h2>
<p>In that initial trace we talked about the default metadata that Phobos provides for the trace. One of the nice things about that metadata is you can easily pile on your own data as well.</p>
<p>If you&apos;re familiar with most time-series databases (Datadog included), you&apos;re probably just as frustrated as I am by the inability to include high-cardinality fields in your metrics. One awesome thing about Datadog&apos;s APM offering, though, is that it offers infinite cardinality for tags, so you can tack on any information that&apos;s valuable to you. That&apos;s huge!</p>
<p>First, we&apos;ll want to get a reference to the Phobos actor context in our actor, just like we did in the last post to log custom metrics:</p>
<pre><code class="language-csharp">private readonly IPhobosActorContext _instrumentation = Context.GetInstrumentation();
</code></pre>
<p>With that in place, we can just add one line to our message handler to log a tag against the currently active span with the contents of the message:</p>
<pre><code class="language-csharp">Receive&lt;Greet&gt;(msg =&gt;
{
    _instrumentation.ActiveSpan.SetTag(&quot;who&quot;, msg.Who);

    Console.WriteLine($&quot;Hello, {msg.Who}&quot;);
});
</code></pre>
<p>What&apos;s nice here is that you don&apos;t need to do anything special to keep track of the current span you&apos;re operating under - Phobos handles that for you. Now when we look at the trace in Datadog you&apos;ll see the new bit of metadata from that tag:</p>
<p><img src="https://gregshackles.com/content/images/2018/11/phobos-trace-tag.png" alt="Custom tag logged in the trace" loading="lazy"></p>
<p>You&apos;ll want to be careful not to log sensitive information in these tags, naturally, but this is a really great way to include relevant diagnostic information to your traces to help you understand what your system was doing at that time.</p>
<h2 id="spans">Spans</h2>
<p>If you looked at that initial flame graph and found it uninteresting, I&apos;m with you - it&apos;s not much of a flame graph with only one span like that. Let&apos;s add some more!</p>
<p>Since we&apos;re still in this little contrived example of an actor system, let&apos;s just add a little bit of code that pretends to do some intermittent work during the message handler:</p>
<pre><code class="language-csharp">for (var i = 0; i &lt; 5; i++)
{
    using (_instrumentation.Tracer.BuildSpan(&quot;nap-time&quot;).WithTag(&quot;iteration&quot;, i).StartActive())
        System.Threading.Thread.Sleep(100);

    System.Threading.Thread.Sleep(50);
}
</code></pre>
<p>Here we loop five times; each iteration creates a span around 100ms of &quot;work&quot;, tags it with the iteration index, and then pauses 50ms before the next one. Don&apos;t do this in a real app, of course, but let&apos;s just pretend these sleeps are network or database calls. Now if we take a look at the flame graph it&apos;ll look a bit more interesting:</p>
<p><img src="https://gregshackles.com/content/images/2018/11/phobos-trace-flame-graph.png" alt="Flame graph with new spans" loading="lazy"></p>
<p>We can see the five nap spans explicitly, each tagged with the data we gave it. Another interesting view that Datadog provides is the list tab, which presents the same data in a different form:</p>
<p><img src="https://gregshackles.com/content/images/2018/11/phobos-trace-list.png" alt="List view of new spans" loading="lazy"></p>
<p>Now we can start to get a better sense of where our actor was spending its time while it was handling the message, and in a nice visual way that doesn&apos;t require spelunking through a lot of log files to piece things together.</p>
<hr>
<p>While this is just the basics of what tracing offers, you can hopefully already see how powerful a tool it can be in really observing how your systems are behaving. That said, this sort of thing is generally referred to as &quot;distributed tracing&quot; for a reason, so in future posts we&apos;ll explore what this looks like with multiple actors that communicate with each other like a real system would!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Monitoring Akka.NET with Datadog and Phobos: Metrics]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>If you&apos;re here on my blog, you&apos;re probably well aware that I&apos;m a fan of both Akka.NET and Datadog, and observability in general. In fact, I even <a href="https://gregshackles.com/monitoring-akka-net-systems-with-datadog/">blogged last year about creating my own Datadog sink for Akka.Monitoring</a> (which is <a href="https://www.nuget.org/packages/Akka.Monitoring.Datadog/">still available</a></p>]]></description><link>https://gregshackles.com/monitoring-akka-net-with-datadog-and-phobos-metrics/</link><guid isPermaLink="false">61ce48a0437e8200017d4162</guid><category><![CDATA[Datadog]]></category><category><![CDATA[Akka.NET]]></category><category><![CDATA[Monitoring]]></category><category><![CDATA[Observability]]></category><category><![CDATA[Xamarin]]></category><dc:creator><![CDATA[Greg Shackles]]></dc:creator><pubDate>Wed, 28 Nov 2018 14:59:43 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>If you&apos;re here on my blog, you&apos;re probably well aware that I&apos;m a fan of both Akka.NET and Datadog, and observability in general. In fact, I even <a href="https://gregshackles.com/monitoring-akka-net-systems-with-datadog/">blogged last year about creating my own Datadog sink for Akka.Monitoring</a> (which is <a href="https://www.nuget.org/packages/Akka.Monitoring.Datadog/">still available on NuGet</a> and we still use it in production every day!).</p>
<p>This scratched some of my itches in terms of getting visibility into my actor systems, but it still fell a little short of what I wanted. For one, it still required adding code like this to my actors:</p>
<pre><code class="language-csharp">protected override void PreStart()
{
    Context.IncrementActorCreated();
    base.PreStart();
}
</code></pre>
<p>Definitely not the end of the world by any stretch, but still not ideal! It also doesn&apos;t play quite as nice with the F# side of things, where I like to define my actors as pure functions instead of using the OO APIs.</p>
<p>Additionally, when building distributed systems it becomes increasingly important to incorporate some sort of distributed tracing into the system to help you diagnose how it&apos;s behaving. This was definitely possible before as well, but would again require baking it all myself into all my actors.</p>
<h1 id="enterphobos">Enter Phobos</h1>
<p>This is why I got very excited when I saw Petabridge introduce <a href="https://phobos.petabridge.com/index.html">Phobos</a> earlier this year. Phobos aims to provide a stronger out-of-the-box offering around monitoring and distributed tracing, often without needing to make any actual changes to your actor code. Like everything else in Akka.NET, it&apos;s highly configurable, and cross-platform as well by nature of .NET Standard 2.0 (in a future post I&apos;ll certainly test this in a mobile app!). It also provides integrations into many well-known and established standards like StatsD, Application Insights, OpenTracing, and Zipkin.</p>
<p>Needless to say, this is all very relevant to my interests. As I start to really dig into Phobos and how to integrate it into my Datadog-driven world, I figured I&apos;d try to write up some of my experiences and what it looks like to actually use it.</p>
<h1 id="metrics">Metrics</h1>
<p>There are a lot of areas to explore, but I figured I&apos;d start with the basics: metrics. Ultimately I&apos;d really love to replace my usage of the NuGet package mentioned earlier with Phobos, or at least be able to ditch all the custom calls like <code>Context.IncrementActorCreated()</code>. Since Datadog speaks StatsD, that seems like the best place to start.</p>
<h2 id="defaultmetrics">Default Metrics</h2>
<p>Since I really want to see the out-of-the-box experience, I&apos;ll start with the simplest actor system in the world and use my <a href="https://github.com/gshackles/akka-samples/tree/master/Greeter">Greeter sample</a> application. It contains a single actor that, given a name, echoes out <code>Hello, {name}</code> to the console...clearly the type of problem for which distributed systems were developed.</p>
<h3 id="c">C#</h3>
<p>I&apos;ll start with <a href="https://github.com/gshackles/akka-samples/blob/master/Greeter/Greeter.CSharp/Program.cs">the C# version</a> first, but then we&apos;ll check out the F# one to see if things work there too. First I&apos;ll need to add a couple new NuGet references to the project:</p>
<ul>
<li>Phobos.Actor</li>
<li>Phobos.Monitoring.StatsD</li>
</ul>
<p>With those installed, all I need to do is set up the HOCON configuration and use it when spinning up the actor system:</p>
<pre><code class="language-csharp">var config = ConfigurationFactory.ParseString(@&quot;
    akka.actor {
        provider = &quot;&quot;Phobos.Actor.PhobosActorRefProvider,Phobos.Actor&quot;&quot;
    }

    phobos {
        monitoring {
            provider-type = statsd
            statsd {
                endpoint = 127.0.0.1
                port = 8125
            }
        }
    }&quot;);
    
using (var system = ActorSystem.Create(&quot;my-system&quot;, config))
</code></pre>
<p>Here we specify that the actor provider should be the Phobos one, to use the StatsD monitoring provider, and where to find the StatsD listener. In my case it&apos;ll be the local Datadog agent running on the host. There are more options you can configure as well, including which actors you want to opt in/out of monitoring, but we&apos;ll just stick with the defaults and monitoring all the things.</p>
<h3 id="fakestatsdlistener">Fake StatsD Listener</h3>
<p>One thing I like to do when testing StatsD metrics is to set up a little UDP listener on that port that just spits out the messages it receives. StatsD is a dead simple protocol, so this can be a useful way to see what&apos;s being reported, as well as a good way to learn how the protocol works. Here&apos;s an example Node script I sometimes use to do this:</p>
<pre><code class="language-javascript">const dgram = require(&apos;dgram&apos;);
const server = dgram.createSocket(&apos;udp4&apos;);

const log = message =&gt; console.log(`[${new Date().toUTCString()}] ${message}`);

server.on(&apos;message&apos;, message =&gt; log(message.toString()));
server.on(&apos;listening&apos;, () =&gt; {
    var address = server.address();

    log(`UDP Server listening on ${address.address}:${address.port}`);
});

server.bind(8125, &apos;127.0.0.1&apos;);
</code></pre>
<p>With that running, let&apos;s fire up the actor system and see what we get:</p>
<pre><code>[Wed, 28 Nov 2018 04:33:35 GMT] UDP Server listening on 127.0.0.1:8125
[Wed, 28 Nov 2018 04:33:46 GMT] 
my-system.my-system.akka.actor.created:1|c
my-system.akka.actor.created:1|c
my-system.user.greeter.messages.received:1|c
my-system.my-system.user.greeter.messages.received:1|c
my-system.Greeter.CSharp.GreetingActor.messages.received:1|c
my-system.my-system.Greeter.CSharp.GreetingActor.messages.received:1|c
my-system.my-system.akka.messages.received:1|c
my-system.akka.messages.received:1|c
</code></pre>
<p>Remember that all we changed was configuration, and not any actor code! Out of the box we got metrics reported for the creation of the system and the actor, along with several variants of the messages-received counter. I&apos;ll be looking for ways to consolidate this down and make use of tags to clean it up, similar to what I did in my NuGet package, but this is a great start!</p>
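<p>For what it&apos;s worth, since Datadog&apos;s flavor of StatsD (DogStatsD) carries tags inline in the metric payload, the consolidated form I have in mind would report something along these lines (the metric name and tags here are hypothetical):</p>
<pre><code>akka.actor.created:1|c|#system:my-system,actor:greeter
</code></pre>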
<h3 id="f">F#</h3>
<p>Ok, let&apos;s try this in F#!</p>
<pre><code class="language-fsharp">let config = ConfigurationFactory.ParseString(&quot;&quot;&quot;
    akka.actor {
        provider = &quot;Phobos.Actor.PhobosActorRefProvider,Phobos.Actor&quot;
    }

    phobos {
        monitoring {
            provider-type = statsd
            statsd {
                endpoint = 127.0.0.1
                port = 8125
            }
        }
    }&quot;&quot;&quot;);

use system = ActorSystem.Create(&quot;my-system&quot;, config)
</code></pre>
<p>This is basically the same code we had in C# - wire up the system using HOCON and we&apos;re off to the races:</p>
<pre><code>[Wed, 28 Nov 2018 04:42:22 GMT] 
my-system.my-system.akka.actor.created:1|c
my-system.akka.actor.created:1|c
my-system.user.greeter.messages.received:1|c
my-system.my-system.user.greeter.messages.received:1|c
my-system.Akka.FSharp.Actors+FunActor`2[[Program+Message, Greeter.FSharp, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null],[System.Object, System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].messages.received:1|c
my-system.my-system.akka.messages.received:1|c
my-system.akka.messages.received:1|c
</code></pre>
<p>Pretty much the same thing! The amusing exception here is where it tries to create a metric using the actor&apos;s class name, which has interesting results for function-based actors :) Either way, it&apos;s awesome that the underlying system lit up and started logging these metrics without changing any actor code.</p>
<h3 id="hookupdatadog">Hook up Datadog</h3>
<p>Now that we&apos;ve got StatsD metrics printing to the console, I&apos;ll swap out my little UDP listener with the actual Datadog agent. Nothing has to change in the actor system configuration for this - I just have to turn off the custom listener and start the Datadog service.</p>
<p>Once that&apos;s up and running, the metrics start appearing in Datadog as expected and I can start creating alerts and dashboards based on them:</p>
<p><img src="https://gregshackles.com/content/images/2018/11/phobos-datadog-metrics.png" alt="Metrics showing up in Datadog" loading="lazy"></p>
<h2 id="moremetrics">More Metrics</h2>
<p>That&apos;s what you get by default, but what if you want to sprinkle in some more metric goodness? Let&apos;s do that.</p>
<h3 id="mailboxlength">Mailbox Length</h3>
<p>One of the things I like to have an eye on in my systems is the length of an actor&apos;s mailbox, in order to get an indication of whether it&apos;s falling behind on processing or something is wrong. Phobos actually makes that dead simple, by exposing it as a configuration property. Just update the <code>monitoring</code> block in the HOCON and you&apos;re good to go:</p>
<pre><code>phobos {
    monitoring {
        monitor-mailbox-depth = on
    }
}
</code></pre>
<p>With that in place you&apos;ll see metrics like these being reported as gauges:</p>
<pre><code>my-system.user.greeter.mailbox.queuelength:0|g
my-system.my-system.user.greeter.mailbox.queuelength:0|g
my-system.my-system.Greeter.CSharp.GreetingActor.mailbox.queuelength:0|g
</code></pre>
<h3 id="custommetrics">Custom Metrics</h3>
<p>You&apos;ll probably also come across situations where you want to log your own custom metrics. If you&apos;re using something like StatsD you could send them directly yourself, but wouldn&apos;t it be nicer to log your custom metrics through the same pipeline as the rest of the Phobos metrics?</p>
<p>The story here is a bit better in C# than F#, for similar reasons to the old Akka.Monitoring stuff. What you can do is use an actor&apos;s context to get its Phobos context:</p>
<pre><code class="language-csharp">private readonly IPhobosActorContext _instrumentation = Context.GetInstrumentation();
</code></pre>
<p>Once you have that, you can use its <code>Monitor</code> property to send metrics through the system:</p>
<pre><code class="language-csharp">_instrumentation.Monitor.IncrementCounter(&quot;awesome-counter&quot;, 1);
</code></pre>
<p>With that in place you&apos;ll see it come through like all the rest of the counters:</p>
<pre><code>my-system.user.greeter.awesome-counter:1|c
my-system.my-system.user.greeter.awesome-counter:1|c
my-system.Greeter.CSharp.GreetingActor.awesome-counter:1|c
my-system.my-system.Greeter.CSharp.GreetingActor.awesome-counter:1|c
</code></pre>
<p>The API exposed on <code>IMonitor</code> doesn&apos;t currently allow for passing through tags with a metric, but I&apos;m hoping this can be added in future versions. Either way, with that one line of code we&apos;ve got a custom metric going through Phobos to StatsD and ultimately Datadog. If down the line we wanted to switch to Application Insights or anything else, it would just be a configuration change and everything else would stay the same.</p>
<hr>
<p>There&apos;s a lot more to explore with Phobos, but it&apos;s exciting to see this sort of functionality starting to get baked right into the framework (and its supporting packages). In the next post I&apos;ll start to look at some of the distributed tracing functionality available in Phobos, and how we can expose that in Datadog&apos;s APM tools.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Tracking Identity Column Saturation in SQL Server with Datadog]]></title><description><![CDATA[<!--kg-card-begin: markdown--><blockquote>
<p>Int32 ought to be enough for any table&apos;s identity column</p>
<p>-- <cite>Most developers at some point</cite></p>
</blockquote>
<p>We&apos;ve all done it, creating a new table in SQL Server and giving it a nice auto-incrementing integer as the primary key. There&apos;s no way that table will</p>]]></description><link>https://gregshackles.com/tracking-identity-column-saturation-in-sql-server-with-datadog/</link><guid isPermaLink="false">61ce48a0437e8200017d4161</guid><category><![CDATA[Datadog]]></category><category><![CDATA[SQL]]></category><category><![CDATA[Monitoring]]></category><category><![CDATA[Observability]]></category><dc:creator><![CDATA[Greg Shackles]]></dc:creator><pubDate>Thu, 19 Jul 2018 18:15:05 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><blockquote>
<p>Int32 ought to be enough for any table&apos;s identity column</p>
<p>-- <cite>Most developers at some point</cite></p>
</blockquote>
<p>We&apos;ve all done it, creating a new table in SQL Server and giving it a nice auto-incrementing integer as the primary key. There&apos;s no way that table will ever reach <em>2,147,483,647</em> rows, right? Now, for most tables that&apos;s likely true, but the last thing you want is to be surprised when suddenly you can no longer insert into your table due to using up that full Int32 space.</p>
<p>In our system there are some tables we knew would be reaching those limits in the relatively near future, and as part of preparing for the migration I wanted to get some good observability on how close we were to the limits and how it was trending. Again, the last thing we wanted was to be surprised. Naturally for me this meant wanting to get this into Datadog so it can easily be visualized and alerted on.</p>
<h1 id="definingtheagentjob">Defining the Agent Job</h1>
<p>One way to implement this would have been to create a normal Datadog agent check and run the query that way. This time around I wanted to see how easily I could do it via scheduled SQL Server Agent jobs and report the metric down to the local Datadog agent via UDP directly. You can define agent jobs in PowerShell, so it ended up being very straightforward.</p>
<p>First I defined a couple variables to declare which database and table to look at:</p>
<pre><code class="language-powershell">$table = &apos;MyTable&apos;
$database = &apos;MyDatabase&apos;
</code></pre>
<p>For now this is just limited to one table, but it sets the stage for being able to layer in more down the line, perhaps simply querying everything.</p>
<p>Next, I used <code>IDENT_CURRENT</code> to grab the current identity value for that table, and calculate the percentage used (note that this is assuming an integer column type here):</p>
<pre><code class="language-powershell">$pctUsed = (Invoke-Sqlcmd `
             -Query &quot;SELECT (CONVERT(float, IDENT_CURRENT(&apos;$database.dbo.$table&apos;)) / 2147483647) * 100 
                     AS PctUsed&quot; `
             -ServerInstance &quot;sql-server-host&quot; `
           ).PctUsed
</code></pre>
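<p>To sanity check the math (this snippet is mine, purely illustrative): the saturation is just the current identity value divided by the column type&apos;s maximum. In Python:</p>

```python
INT_MAX = 2_147_483_647  # maximum value of a SQL Server int identity column

def pct_used(current_identity, type_max=INT_MAX):
    """Percentage of the identity space consumed, mirroring the query above."""
    return current_identity / type_max * 100

# An identity value around 1.5 billion means roughly 70% of the int space is gone
assert round(pct_used(1_500_000_000), 1) == 69.8
```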
<p>With that I construct a metric, tagged with the database and table, and send that to the local Datadog agent:</p>
<pre><code class="language-powershell">$message = &quot;olo.database.identity_space_used:$pctUsed|g|#table:$table,database:$database&quot;
$messageBytes = [Text.Encoding]::ASCII.GetBytes($message)

$socket = New-Object System.Net.Sockets.UDPClient 
$socket.Send($messageBytes, $messageBytes.Length, &quot;127.0.0.1&quot;, 8125) 
$socket.Close() 
</code></pre>
<p>And that&apos;s it! I can set this agent job on a schedule and get regularly reported metrics flowing into Datadog to track the column saturation where I need it. Since this is PowerShell I could also have just pulled in the Datadog client library, but in this case it was nicer to keep it dependency-free and construct the message manually, since it&apos;s straightforward enough.</p>
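<p>For reference, here&apos;s the same tagged datagram sketched in Python (an illustration of the format, not part of the agent job itself) - the <code>|#tag:value,...</code> suffix is Datadog&apos;s extension to plain StatsD:</p>

```python
import socket

def build_gauge(name, value, tags):
    """Datadog-flavored StatsD gauge datagram: metric:value|g|#tag:value,..."""
    tag_str = ",".join(f"{k}:{v}" for k, v in tags.items())
    return f"{name}:{value}|g|#{tag_str}"

message = build_gauge("olo.database.identity_space_used", 69.8,
                      {"table": "MyTable", "database": "MyDatabase"})
assert message == "olo.database.identity_space_used:69.8|g|#table:MyTable,database:MyDatabase"

# Fire it at the local agent over UDP, just like the PowerShell above does
socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(
    message.encode("ascii"), ("127.0.0.1", 8125))
```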
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Styling Xamarin.Forms Apps with CSS]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Some months ago a feature landed in Xamarin.Forms that seemed to truly polarize the Xamarin.Forms community: support for <a href="https://blog.xamarin.com/update-to-xamarin-forms-3-0-pre-release-available-today/#stylesheets">styling applications using CSS</a>. Some argued that it was an unnecessary introduction to &quot;Web&quot; technology to the native development experience, and others that it simply isn&apos;t</p>]]></description><link>https://gregshackles.com/styling-xamarin-forms-apps-with-css/</link><guid isPermaLink="false">61ce48a0437e8200017d415f</guid><category><![CDATA[Xamarin]]></category><category><![CDATA[CSS]]></category><dc:creator><![CDATA[Greg Shackles]]></dc:creator><pubDate>Tue, 05 Jun 2018 16:57:44 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Some months ago a feature landed in Xamarin.Forms that seemed to truly polarize the Xamarin.Forms community: support for <a href="https://blog.xamarin.com/update-to-xamarin-forms-3-0-pre-release-available-today/#stylesheets">styling applications using CSS</a>. Some argued that it was an unnecessary introduction to &quot;Web&quot; technology to the native development experience, and others that it simply isn&apos;t the right solution to the problem.</p>
<p>While I sympathize with the latter opinion and think there&apos;s plenty of room for some good debate on the right path forward, I count myself as part of a third camp: I think that CSS is a powerful (and frequently maligned) solution to the problem of styling native mobile applications.</p>
<hr>
<p>Read the rest of the article <a href="https://visualstudiomagazine.com/articles/2018/04/01/styling-xamarin-forms.aspx">over at Visual Studio Magazine</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Writing .NET Core Global Tools with F#]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>The <a href="https://blogs.msdn.microsoft.com/dotnet/2018/05/30/announcing-net-core-2-1/">release of .NET Core 2.1</a> brought with it a bunch of great additions, and one of the ones I&apos;ve been looking forward to the most is the addition of support for creating global tools. This has always been a great feature in the JavaScript world, allowing</p>]]></description><link>https://gregshackles.com/writing-net-core-global-tools-with-fsharp/</link><guid isPermaLink="false">61ce48a0437e8200017d415e</guid><category><![CDATA[F#]]></category><category><![CDATA[.NET Core]]></category><dc:creator><![CDATA[Greg Shackles]]></dc:creator><pubDate>Thu, 31 May 2018 13:30:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>The <a href="https://blogs.msdn.microsoft.com/dotnet/2018/05/30/announcing-net-core-2-1/">release of .NET Core 2.1</a> brought with it a bunch of great additions, and one of the ones I&apos;ve been looking forward to the most is the addition of support for creating global tools. This has always been a great feature in the JavaScript world, allowing you to write and distribute command-line tools via npm.</p>
<p>F# has long been my favorite go-to language for scripting, and as such I&apos;ve compiled quite a few F# scripts that I run regularly for a variety of things. With the addition of global tooling support in .NET Core, now I can create actual command-line tools from those scripts and even distribute them via NuGet!</p>
<h1 id="introducingfsharpsay">Introducing: fsharpsay</h1>
<p>I&apos;ll admit up front that in order to come up with an easy example to show off, I decided to reinterpret <a href="https://github.com/dotnet/core/tree/master/samples/dotnetsay">Microsoft&apos;s <code>dotnetsay</code> example</a> from the aforementioned blog post. This one will be totally different, though...I&apos;ll use the F# logo instead of .NET Bot! Let&apos;s take a look at how easy it is to create a new tool.</p>
<p>First, we&apos;ll create a new F# console app:</p>
<pre><code>dotnet new console -lang F#
</code></pre>
<p>In the generated <code>fsproj</code> file add a <code>PackAsTool</code> element such that the whole file looks like:</p>
<pre><code class="language-xml">&lt;Project Sdk=&quot;Microsoft.NET.Sdk&quot;&gt;
  &lt;PropertyGroup&gt;
    &lt;OutputType&gt;Exe&lt;/OutputType&gt;
    &lt;TargetFramework&gt;netcoreapp2.1&lt;/TargetFramework&gt;
    &lt;RootNamespace&gt;FSharpSay&lt;/RootNamespace&gt;
    &lt;PackAsTool&gt;true&lt;/PackAsTool&gt;
  &lt;/PropertyGroup&gt;

  &lt;ItemGroup&gt;
    &lt;Compile Include=&quot;Program.fs&quot; /&gt;
  &lt;/ItemGroup&gt;
&lt;/Project&gt;
</code></pre>
<p>Next we&apos;ll create a function that takes in a message and prints it coming from the F# logo:</p>
<pre><code class="language-fsharp">let sayIt = printfn @&quot;
            %s
            __________________
                              \
                                \
                                `  ` 
                              `/+  /:` 
                            `/ss+  /++:`
                          `/ssss+  /++++:.
                        ./ssssss+  /++++++/.
                      ./ssssssss+  /++++++++/.
                    ./ssssssssss+  /++++++++++/.
                  ./ssssssssssss+  +++++++++++++/-
                ./ssssssssssssss/  /++++++++++++++/-`
              ./ssssssssssssss+.    ./+++++++++++++++-`
            ./ssssssssssssss+.        ./+++++++++++++++-`
          .+ssssssssssssss+.  `:+       ./+++++++++++++++:`
        .+ssssssssssssss+.  `:os+         ./+++++++++++++++:`
      .+ssssssssssssss+.  `:osss+           ./+++++++++++++++:`
    .+ssssssssssssss+.  `:osssss+             ./+++++++++++++++/.
  .+ssssssssssssss+.  `:osssssss+               ./+++++++++++++++/.
  /sssssssssssssso-   .osssssssss+                 -++++++++++++++++:
  -+ssssssssssssso/`  `:osssssss+               `:+++++++++++++++/.
    -+ssssssssssssso/`  `:osssss+             `:+++++++++++++++/.
      -+ssssssssssssss/.  `:osss+           `:+++++++++++++++/.
        -+ssssssssssssss/.  `:os+         .:+++++++++++++++:.
          -+ssssssssssssss/.  `:+       ./+++++++++++++++:`
            .+ssssssssssssss/.  `     ./+++++++++++++++:`
              .+ssssssssssssss/.    ./+++++++++++++++:`
                .+ssssssssssssss/  :+++++++++++++++-`
                  .+ssssssssssss+  /+++++++++++++-` 
                    .+ssssssssss+  /++++++++++/-` 
                      .+ssssssss+  /++++++++/-  
                        .+ssssss+  /++++++/. 
                          .+ssss+  /++++/. 
                            .+ss+  /++/. 
                              .++  /:. 
                                `  `      &quot;
</code></pre>
<p>F# is known for brevity, but unfortunately there&apos;s not much that can be done with the big logo!</p>
<p>Finally, the entry point for the app:</p>
<pre><code class="language-fsharp">[&lt;EntryPoint&gt;]
let main argv =
    match argv with 
        | [|message|] -&gt; message
        | _ -&gt; &quot;F# rocks!&quot;
    |&gt; sayIt

    0
</code></pre>
<p>That&apos;s actually all you need. A global tool really is just a console app, meaning you can even test it out via <code>dotnet run</code> the way you normally would for a console app.</p>
<h1 id="packagingandinstalling">Packaging and Installing</h1>
<p>Now let&apos;s go ahead and create a redistributable tool out of the app by creating a NuGet package:</p>
<pre><code>dotnet pack -c release -o nupkg
</code></pre>
<p>This will create a standard <code>nupkg</code> package for the app, which you could then push to NuGet itself or any other feed of your choosing. For now, let&apos;s just install it from the local file:</p>
<pre><code>dotnet tool install --add-source ./nupkg -g fsharpsay
</code></pre>
<p>By specifying <code>-g</code>, we&apos;re saying that we want this tool to be available globally on the machine.</p>
<h1 id="runningthetool">Running the Tool</h1>
<p>Now that it&apos;s installed, we can go ahead and run it from the command line the same way you would any other app in your path:</p>
<pre><code>&gt; fsharpsay &quot;F# rocks&quot;

            F# rocks
            __________________
                              \
                                \
                                `  ` 
                              `/+  /:` 
                            `/ss+  /++:`
                          `/ssss+  /++++:.
                        ./ssssss+  /++++++/.
                      ./ssssssss+  /++++++++/.
                    ./ssssssssss+  /++++++++++/.
                  ./ssssssssssss+  +++++++++++++/-
                ./ssssssssssssss/  /++++++++++++++/-`
              ./ssssssssssssss+.    ./+++++++++++++++-`
            ./ssssssssssssss+.        ./+++++++++++++++-`
          .+ssssssssssssss+.  `:+       ./+++++++++++++++:`
        .+ssssssssssssss+.  `:os+         ./+++++++++++++++:`
      .+ssssssssssssss+.  `:osss+           ./+++++++++++++++:`
    .+ssssssssssssss+.  `:osssss+             ./+++++++++++++++/.
  .+ssssssssssssss+.  `:osssssss+               ./+++++++++++++++/.
  /sssssssssssssso-   .osssssssss+                 -++++++++++++++++:
  -+ssssssssssssso/`  `:osssssss+               `:+++++++++++++++/.
    -+ssssssssssssso/`  `:osssss+             `:+++++++++++++++/.
      -+ssssssssssssss/.  `:osss+           `:+++++++++++++++/.
        -+ssssssssssssss/.  `:os+         .:+++++++++++++++:.
          -+ssssssssssssss/.  `:+       ./+++++++++++++++:`
            .+ssssssssssssss/.  `     ./+++++++++++++++:`
              .+ssssssssssssss/.    ./+++++++++++++++:`
                .+ssssssssssssss/  :+++++++++++++++-`
                  .+ssssssssssss+  /+++++++++++++-` 
                    .+ssssssssss+  /++++++++++/-` 
                      .+ssssssss+  /++++++++/-  
                        .+ssssss+  /++++++/. 
                          .+ssss+  /++++/. 
                            .+ss+  /++/. 
                              .++  /:. 
                                `  `
</code></pre>
<h1 id="availableonnuget">Available on NuGet</h1>
<p>Clearly this is a must-have tool that the world needs, so I went ahead and made the <a href="https://github.com/gshackles/fsharpsay">source available on GitHub</a> and <a href="https://www.nuget.org/packages/FSharpSay">published the tool to NuGet</a>. To install it, simply run this and you&apos;re good to go:</p>
<pre><code>dotnet tool install -g fsharpsay
</code></pre>
<p>Now if you&apos;ll excuse me, I&apos;ve got some more global tools to convert!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Checking Savings Bond Values with F#, Docker, and Azure]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>When I was growing up some of my family insisted on giving me savings bonds for things like Christmas and my birthday, in what I assume was a frugal way to teach me about very (very) delayed gratification. I still have a bunch, but the value isn&apos;t</p>]]></description><link>https://gregshackles.com/checking-savings-bond-values-with-f-docker-and-azure/</link><guid isPermaLink="false">61ce48a0437e8200017d415d</guid><category><![CDATA[Azure]]></category><category><![CDATA[F#]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Greg Shackles]]></dc:creator><pubDate>Sun, 14 Jan 2018 22:42:04 GMT</pubDate><media:content url="https://gregshackles.com/content/images/2018/01/image-1.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://gregshackles.com/content/images/2018/01/image-1.png" alt="Checking Savings Bond Values with F#, Docker, and Azure"><p>When I was growing up some of my family insisted on giving me savings bonds for things like Christmas and my birthday, in what I assume was a frugal way to teach me about very (very) delayed gratification. I still have a bunch, but the value isn&apos;t particularly high and the process of checking their value is tedious, so I generally only end up checking it maybe every five years or so.</p>
<p>I figured it was about time to check again, but this time I wanted to think about what I could automate. The idea: read in a list of bonds from a CSV file, automatically enter those into the website for checking their value, and report their total value to me. Even having to remember to run this manually seems a bit tedious, so I also want it to automatically run every month and email me the results.</p>
<h1 id="theapp">The App</h1>
<p>For the app itself I decided to do it as an F# console app, and take advantage of <a href="https://lefthandedgoat.github.io/canopy/">canopy</a>, a lovely library for automated browser testing. Using FSharp.Data&apos;s CSV type provider, I defined a simple type to match what&apos;s in the data file:</p>
<pre><code class="language-fsharp">type Bonds = CsvProvider&lt;HasHeaders = false, Schema = &quot;SerialNumber(string),IssueDate(string)&quot;&gt;
</code></pre>
<p>Next, I&apos;ll start running Chrome and browse to the site:</p>
<pre><code class="language-fsharp">start chrome
url &quot;https://www.treasurydirect.gov/BC/SBCPrice&quot;
</code></pre>
<p>Now I want to enter each bond into the form on the page and submit it. Canopy makes this really easy and concise:</p>
<pre><code class="language-fsharp">Bonds.Load(&quot;c:\\bonds.csv&quot;).Rows
    |&gt; Seq.iter (fun bond -&gt;
        let denomination = match bond.SerialNumber with
                           | serial when serial.StartsWith(&quot;L&quot;) -&gt; 50
                           | serial when serial.StartsWith(&quot;C&quot;) -&gt; 100
                           | _ -&gt; failwith &quot;Unknown bond denomination&quot;

        &quot;select[name=Denomination]&quot; &lt;&lt; string denomination
        &quot;input[name=SerialNumber]&quot; &lt;&lt; bond.SerialNumber
        &quot;input[name=IssueDate]&quot; &lt;&lt; bond.IssueDate
        click &quot;input[name=&apos;btnAdd.x&apos;]&quot;
)
</code></pre>
<p>With just those few lines of code, all the bonds will have been entered into the site and the totals are now available to read out:</p>
<pre><code class="language-fsharp">let totals = elements &quot;table#ta1 tr:nth-child(2) td&quot;
let lines = seq {
    yield sprintf &quot;Total Value: %s&quot; totals.[1].Text
    yield sprintf &quot;Total Price Paid: %s&quot; totals.[0].Text
    yield sprintf &quot;Total Interest: %s&quot; totals.[2].Text
    yield sprintf &quot;YTD Interest: %s&quot; totals.[3].Text
} 
let content = lines |&gt; String.concat &quot;&lt;br /&gt;&quot;
</code></pre>
<p>For now this will just be a boring little report, with these four items printed on separate lines. Finally, all we need to do is send the email and quit the browser. To send the email I&apos;m using a SendGrid account:</p>
<pre><code class="language-fsharp">let client = SendGridClient(Environment.GetEnvironmentVariable &quot;SendGridApiKey&quot;)
let emailAddress = EmailAddress(&quot;greg@gregshackles.com&quot;, &quot;Savings Bond Calculator&quot;)

MailHelper.CreateSingleEmail(emailAddress, emailAddress, &quot;Savings Bond Values&quot;, content, content)
|&gt; client.SendEmailAsync
|&gt; Async.AwaitTask
|&gt; Async.Ignore
|&gt; Async.RunSynchronously

quit()
</code></pre>
<p>That&apos;s the entire app! Running it successfully reminds me how little my stack of bonds is worth. I did say I wanted to run this on a schedule, though, so let&apos;s kick things up a notch.</p>
<h1 id="dockerizingtheapp">Dockerizing the App</h1>
<p>In order to run this easily on a machine other than my own, I decided to go ahead and create a Docker image for it that I can run in Azure. Here&apos;s my <code>Dockerfile</code>:</p>
<pre><code class="language-text">FROM microsoft/windowsservercore
COPY bin/Release/ /

ADD http://chromedriver.storage.googleapis.com/2.35/chromedriver_win32.zip /
ADD https://dl.google.com/tag/s/dl/chrome/install/googlechromestandaloneenterprise64.msi /

RUN msiexec /i googlechromestandaloneenterprise64.msi /quiet

ENTRYPOINT BondCalculator.exe
</code></pre>
<p>I create a Windows image, copy over the compiled app, install Chrome and the Chrome web driver, and then finally run the task. Once the app finishes the container will terminate, since there&apos;s no reason to keep anything running most of the time for this.</p>
<p>I also threw together a little PowerShell script to build and create the image:</p>
<pre><code class="language-powershell">Invoke-Expression &quot;msbuild /p:VisualStudioVersion=14.0 /p:Configuration=Release&quot;

Copy-Item C:\bonds.csv bin\Release\bonds.csv

Invoke-Expression &quot;docker build -t bond-calculator .&quot;
</code></pre>
<p>Once that runs we now have a ginormous 11.2GB image for the app:</p>
<pre><code>&gt; docker images --format &quot;{{.Repository}}: {{.Size}}&quot;
bond-calculator: 11.2GB
</code></pre>
<p>And now we can run the app with <code>docker run</code>:</p>
<pre><code>docker run --env SendGridApiKey=&lt;key&gt; bond-calculator
</code></pre>
<h1 id="pushingtoazure">Pushing to Azure</h1>
<p>I have a private container registry in Azure that I can use to store my containers, so I need to push this image out to that:</p>
<pre><code>docker tag bond-calculator myregistry.azurecr.io/bond-calculator
docker push myregistry.azurecr.io/bond-calculator
</code></pre>
<p>With that pushed, I can now use the Azure CLI to create a new container instance to actually run it in Azure:</p>
<pre><code>az container create `
    --resource-group mygroup `
    --name bond-calculator `
    --image myregistry.azurecr.io/bond-calculator:latest `
    --cpu 1 --memory 1 `
    --registry-password topsecret `
    --restart-policy OnFailure `
    --os-type Windows `
    --environment-variables SendGridApiKey=&lt;key&gt;
</code></pre>
<p>This creates a new container with 1 CPU and 1GB of memory and runs my app on it. Sure enough, a few minutes later a new email report shows up in my inbox! Using the Azure CLI again we can look up more details about the container:</p>
<pre><code>&gt; az container show --resource-group gshackles --name bond-calculator
...
        &quot;currentState&quot;: {
          &quot;detailStatus&quot;: &quot;Completed&quot;,
          &quot;exitCode&quot;: 0,
          &quot;finishTime&quot;: &quot;2018-01-14T16:34:53+00:00&quot;,
          &quot;startTime&quot;: &quot;2018-01-14T16:34:15+00:00&quot;,
          &quot;state&quot;: &quot;Terminated&quot;
        },
...
</code></pre>
<p>The container ran for 38 seconds, so that&apos;s all I&apos;m going to have to pay for. I think I can live with that.</p>
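<p>That figure falls straight out of the <code>startTime</code> and <code>finishTime</code> in the output above - a quick Python check (my illustration, not part of the setup):</p>

```python
from datetime import datetime

# Timestamps reported by `az container show` above
start = datetime.fromisoformat("2018-01-14T16:34:15+00:00")
finish = datetime.fromisoformat("2018-01-14T16:34:53+00:00")

# Container instances bill for execution time, so this is the whole billable window
assert (finish - start).total_seconds() == 38.0
```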
<h1 id="runningonaschedule">Running On A Schedule</h1>
<p>Now that I&apos;ve got it running in Azure, I want to get it running on a schedule so that I don&apos;t need to actually trigger it myself. At first I figured I&apos;d just write up a quick Azure Function to do it, but that didn&apos;t pan out so well. I had planned to just write a little PowerShell function that executes monthly and issues the PowerShell equivalent of that <code>az container create</code> command above, but it turns out that the <code>AzureRM</code> modules loaded in the function host are pretty old, and don&apos;t yet include the cmdlets for container instances.</p>
<p>After toying around with a few different hacky ideas, I realized that Logic Apps actually have support for doing just what I need. I put together a simple little workflow that triggers monthly and creates a container instance with the same parameters as earlier:</p>
<p><img src="https://gregshackles.com/content/images/2018/01/scheduled-logic-app.png" alt="Checking Savings Bond Values with F#, Docker, and Azure" loading="lazy"></p>
<p>Just to make sure it all still worked, I triggered a run for the logic app and sure enough, a new email arrived a little while after:</p>
<p><img src="https://gregshackles.com/content/images/2018/01/savings-bond-email.png" alt="Checking Savings Bond Values with F#, Docker, and Azure" loading="lazy"></p>
<hr>
<p>Was this all overkill for what I actually needed here? Almost certainly! In the end, it was a fun exercise to put all these pieces together like this, and turned out to be a pretty great option for these types of scenarios where I need something running on a schedule and it can&apos;t be supported by Azure Functions.</p>
<p>For anyone interested, the code for this app is <a href="https://github.com/gshackles/bond-calculator">all available on GitHub</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Getting Started with Augmented Reality in iOS 11]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Over the last several years augmented reality (AR) has become a hot topic across all platforms and technology sectors. Apple&apos;s release of iOS 11 included a new framework called ARKit that aims to make it easy for developers to add AR experiences into their apps without a lot</p>]]></description><link>https://gregshackles.com/getting-started-with-augmented-reality-in-ios-11/</link><guid isPermaLink="false">61ce48a0437e8200017d415c</guid><category><![CDATA[Xamarin]]></category><dc:creator><![CDATA[Greg Shackles]]></dc:creator><pubDate>Fri, 08 Dec 2017 15:08:31 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Over the last several years augmented reality (AR) has become a hot topic across all platforms and technology sectors. Apple&apos;s release of iOS 11 included a new framework called ARKit that aims to make it easy for developers to add AR experiences into their apps without a lot of hassle or ceremony. While it&apos;s still a little limited in its initial form, Apple was still able to create an approachable framework for incorporating AR into apps, even for developers without much (if any) 2-D or 3-D programming experience.</p>
<p>In this article, I won&apos;t dive deep into how ARKit works. Instead, I&apos;ll walk through creating an AR app from scratch to demonstrate how it fits together. Because I&apos;ve always thought my household could use more minions (of the &quot;Despicable Me&quot; variety) hanging around, I&apos;ll create an app that allows me to place a 3-D minion anywhere I&apos;d like just by tapping on a spot in my house from within the app.</p>
<p>Check out the rest of the article over at <a href="https://visualstudiomagazine.com/articles/2017/12/06/ar-in-ios-11.aspx">Visual Studio Magazine</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[A Mobile DevOps Retrospective, Part III: Measurement, the Last Mile]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>As I mentioned recently in parts <a href="https://gregshackles.com/a-mobile-devops-retrospective-part-i-150-apps-later/">one</a> and <a href="https://gregshackles.com/a-mobile-devops-retrospective-part-ii-automation/">two</a>, I recently had the pleasure of writing a series of guest posts for Microsoft&apos;s App Center blog about mobile DevOps and our experiences at Olo. The third (and final) post in the series is now available, enjoy!</p>
<p><a href="https://aka.ms/Uoenf2">A Mobile</a></p>]]></description><link>https://gregshackles.com/a-mobile-devops-retrospective-part-iii-measurement-the-last-mile/</link><guid isPermaLink="false">61ce48a0437e8200017d415b</guid><category><![CDATA[Xamarin]]></category><category><![CDATA[DevOps]]></category><category><![CDATA[App Center]]></category><dc:creator><![CDATA[Greg Shackles]]></dc:creator><pubDate>Wed, 06 Dec 2017 02:30:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>As I mentioned recently in parts <a href="https://gregshackles.com/a-mobile-devops-retrospective-part-i-150-apps-later/">one</a> and <a href="https://gregshackles.com/a-mobile-devops-retrospective-part-ii-automation/">two</a>, I recently had the pleasure of writing a series of guest posts for Microsoft&apos;s App Center blog about mobile DevOps and our experiences at Olo. The third (and final) post in the series is now available, enjoy!</p>
<p><a href="https://aka.ms/Uoenf2">A Mobile DevOps Retrospective, Part III: Measurement, the Last Mile</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[A Mobile DevOps Retrospective Part II: Automation]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>As I <a href="https://gregshackles.com/a-mobile-devops-retrospective-part-i-150-apps-later/">blogged about recently</a>, I had the pleasure of writing a series of guest posts for Microsoft&apos;s Mobile Center blog about mobile DevOps and our experiences at Olo. I&apos;m happy to share that part two is now available, and the third will be out soon.</p>]]></description><link>https://gregshackles.com/a-mobile-devops-retrospective-part-ii-automation/</link><guid isPermaLink="false">61ce48a0437e8200017d415a</guid><category><![CDATA[Xamarin]]></category><category><![CDATA[Mobile Center]]></category><category><![CDATA[DevOps]]></category><dc:creator><![CDATA[Greg Shackles]]></dc:creator><pubDate>Tue, 31 Oct 2017 15:24:50 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>As I <a href="https://gregshackles.com/a-mobile-devops-retrospective-part-i-150-apps-later/">blogged about recently</a>, I had the pleasure of writing a series of guest posts for Microsoft&apos;s Mobile Center blog about mobile DevOps and our experiences at Olo. I&apos;m happy to share that part two is now available, and the third will be out soon. Enjoy!</p>
<p><a href="https://aka.ms/R3ulra">A Mobile DevOps Retrospective Part II: Automation</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Using Akka.NET Actor Systems in Xamarin Apps]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><a href="http://getakka.net/">Akka.NET</a> is a great toolkit for building concurrent and fault-tolerant systems by way of the actor model. Most think of actor systems as something you would just do on the server side of things, as part of building large distributed systems, but the approach works great for all sorts</p>]]></description><link>https://gregshackles.com/using-akka-net-in-xamarin-apps/</link><guid isPermaLink="false">61ce48a0437e8200017d4159</guid><category><![CDATA[Xamarin]]></category><category><![CDATA[Xamarin.Forms]]></category><category><![CDATA[Akka.NET]]></category><dc:creator><![CDATA[Greg Shackles]]></dc:creator><pubDate>Sun, 08 Oct 2017 18:41:08 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><a href="http://getakka.net/">Akka.NET</a> is a great toolkit for building concurrent and fault-tolerant systems by way of the actor model. Most think of actor systems as something you would just do on the server side of things, as part of building large distributed systems, but the approach works great for all sorts of applications. The recent release of <a href="https://petabridge.com/blog/akkdotnet-v13-dotnetcore/">Akka.NET 1.3</a> brought with it support for .NET Standard 1.6, so naturally I needed to try using it from a Xamarin app.</p>
<p>To give it a spin, I&apos;ll build a simple Xamarin.Forms app that takes a URL and begins crawling it for links, reports back what it finds, crawls those new links, and so on. Something like this wouldn&apos;t be too difficult to build without the actor model, but actors work really well for this type of scenario.</p>
<h1 id="messagetypes">Message Types</h1>
<p>Let&apos;s define some message types for communicating progress during the crawl. First, a command to initiate scraping a given URL:</p>
<pre><code class="language-csharp">public class Scrape 
{
    public string Url { get; }

    public Scrape(string url) =&gt; Url = url;
}
</code></pre>
<p>Then we&apos;ll need a message to communicate the contents of a URL after it&apos;s downloaded:</p>
<pre><code class="language-csharp">public class DownloadUrlResult
{
    public string Html { get; }

    public DownloadUrlResult(string html) =&gt; Html = html;
}
</code></pre>
<p>Finally we need a message to communicate the final results of scraping a URL:</p>
<pre><code class="language-csharp">public class ScrapeResult
{
    public string Url { get; }
    public string Title { get; }
    public IList&lt;string&gt; LinkedUrls { get; }

    public ScrapeResult(string url, string title, IList&lt;string&gt; linkedUrls)
    {
        Url = url;
        Title = title;
        LinkedUrls = linkedUrls;
    }
}
</code></pre>
<h1 id="theactorsystem">The Actor System</h1>
<p>Now we can define some actors to do the work.</p>
<h2 id="scrapeactor">ScrapeActor</h2>
<p>First we&apos;ll create an actor that is responsible for downloading and parsing a URL, and then messaging the results back to its parent:</p>
<pre><code class="language-csharp">public class ScrapeActor : ReceiveActor
{
    public ScrapeActor(IActorRef parent)
    {
        Receive&lt;Scrape&gt;(msg =&gt; OnReceiveScrape(msg));
        Receive&lt;ScrapeResult&gt;(msg =&gt; parent.Forward(msg));
    }

    private void OnReceiveScrape(Scrape msg)
    {
        var config = Configuration.Default.WithDefaultLoader();

        BrowsingContext.New(config).OpenAsync(msg.Url).ContinueWith(request =&gt;
        {
            var document = request.Result;
            var links = document.Links
                                .Select(link =&gt; link.GetAttribute(&quot;href&quot;))
                                .ToList();

            return new ScrapeResult(document.Url, document.Title, links);
        }, TaskContinuationOptions.ExecuteSynchronously).PipeTo(Self);
    }
}
</code></pre>
<p>The downloading and parsing here is handled by the <a href="https://anglesharp.github.io/">AngleSharp</a> library, which makes it really easy. When a message comes in saying to scrape a URL, the actor downloads and parses that URL, and once the result is ready it forwards it up the chain. Because this actor is focused on doing just one task at a time, we can spin up as many of these concurrently as we need to speed up processing.</p>
<h2 id="coordinatoractor">CoordinatorActor</h2>
<p>With that actor ready to go, next we&apos;ll set up a coordinator actor that manages a pool of <code>ScrapeActors</code>:</p>
<pre><code class="language-csharp">public class CoordinatorActor : ReceiveActor
{
    private readonly IActorRef _crawlers;

    public CoordinatorActor()
    {
        _crawlers = Context.ActorOf(
            Props.Create(() =&gt; new ScrapeActor(Self)).WithRouter(new SmallestMailboxPool(10)));

        Receive&lt;Scrape&gt;(msg =&gt; _crawlers.Tell(msg));
        Receive&lt;ScrapeResult&gt;(msg =&gt; OnReceiveScrapeResult(msg));
    }

    private void OnReceiveScrapeResult(ScrapeResult result)
    {
        foreach (var url in result.LinkedUrls)
            _crawlers.Tell(new Scrape(url));

        if (!string.IsNullOrWhiteSpace(result.Title))
            Context.System.EventStream.Publish(result);
    }
}
</code></pre>
<p>As scrape requests come in, the coordinator sends them down to the worker in its pool with the smallest mailbox. This is where the actor model really shines: you can easily adjust the size of the pool or the routing algorithm without touching the rest of the code. You could even load the router setup entirely from configuration files to avoid code changes altogether.</p>
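<p>As a sketch of what that could look like (this isn&apos;t from the sample app - the <code>/coordinator/crawlers</code> path assumes the coordinator is named <code>coordinator</code> and names its router child <code>crawlers</code>), the pool could be declared in Akka.NET&apos;s HOCON configuration:</p>
<pre><code>akka.actor.deployment {
  /coordinator/crawlers {
    router = smallest-mailbox-pool
    nr-of-instances = 10
  }
}
</code></pre>
<p>The pool would then be created with <code>Props.Create(() =&gt; new ScrapeActor(Self)).WithRouter(FromConfig.Instance)</code> and the name <code>crawlers</code>, letting the pool size and routing strategy change without recompiling.</p>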
<p>As results come in it sends new scrape requests back to the pool of workers in order to keep the crawling going, and then publishes the results to the event stream, which is a built-in publish/subscribe channel in Akka.NET.</p>
<h2 id="resultdispatchactor">ResultDispatchActor</h2>
<p>Finally we&apos;ll create a small actor that will act as a bridge between the actor system and a view model driving the app&apos;s behavior (this will be defined shortly):</p>
<pre><code class="language-csharp">public class ResultDispatchActor : ReceiveActor
{
    public ResultDispatchActor(MainViewModel viewModel) =&gt;
        Receive&lt;ScrapeResult&gt;(result =&gt;
            // marshal back to the UI thread, since Results is bound to the ListView
            Device.BeginInvokeOnMainThread(() =&gt; viewModel.Results.Add(result)));
}
</code></pre>
<p>As results are received they are appended to the view model&apos;s list of results. One interesting thing to note here is that since an actor only processes one message at a time, you eliminate a lot of collection concurrency issues you might have to worry about otherwise.</p>
<h2 id="startingthesystem">Starting The System</h2>
<p>Now that our actors are defined, we just need to compose them into an actual actor system. For this sample we&apos;ll just do that statically when the app starts:</p>
<pre><code class="language-csharp">public static class CrawlingSystem
{
    private static readonly ActorSystem _system;
    private static readonly IActorRef _coordinator;

    static CrawlingSystem()
    {
        _system = ActorSystem.Create(&quot;crawling-system&quot;);
        _coordinator = _system.ActorOf(Props.Create&lt;CoordinatorActor&gt;(), &quot;coordinator&quot;);
    }
    
    public static void StartCrawling(string url, MainViewModel viewModel)
    {
        var props = Props.Create(() =&gt; new ResultDispatchActor(viewModel));
        var dispatcher = _system.ActorOf(props);

        _system.EventStream.Subscribe(dispatcher, typeof(ScrapeResult));

        _coordinator.Tell(new Scrape(url));
    }
}
</code></pre>
<p>Here we expose a <code>StartCrawling</code> method that takes in a URL and a view model, creates a bridge actor for that view model, and subscribes it to the stream of results.</p>
<h1 id="theapp">The App</h1>
<p>Now let&apos;s actually plug this into an app. First, let&apos;s define that <code>MainViewModel</code> class:</p>
<pre><code class="language-csharp">public class MainViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;
    public ObservableCollection&lt;ScrapeResult&gt; Results { get; } = new ObservableCollection&lt;ScrapeResult&gt;();
    public ICommand StartCrawlingCommand { get; }

    private string _url;
    public string Url
    {
        get { return _url; }
        set
        {
            _url = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Url)));
        }
    }

    public MainViewModel() =&gt;
        StartCrawlingCommand = new Command(() =&gt; CrawlingSystem.StartCrawling(_url, this));
}
</code></pre>
<p>The view model exposes a URL property that can be bound to an entry field, a collection of results, and a command that initiates crawling for the given URL.</p>
<p>Now we can define the UI in XAML:</p>
<pre><code class="language-xml">&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
&lt;ContentPage xmlns=&quot;http://xamarin.com/schemas/2014/forms&quot; 
    xmlns:x=&quot;http://schemas.microsoft.com/winfx/2009/xaml&quot; 
    xmlns:akka=&quot;clr-namespace:MobileCrawler.CSharp;assembly=MobileCrawler.CSharp&quot;
    x:Class=&quot;MobileCrawler.CSharp.MainPage&quot;&gt;
    &lt;ContentPage.Content&gt;
        &lt;StackLayout Padding=&quot;15, 30, 15, 15&quot; Spacing=&quot;10&quot;&gt;
            &lt;StackLayout&gt;
                &lt;Entry x:Name=&quot;Query&quot; Text=&quot;{Binding Url}&quot; 
                       HorizontalOptions=&quot;FillAndExpand&quot; Keyboard=&quot;Url&quot;
                       Placeholder=&quot;Enter a URL&quot; HeightRequest=&quot;40&quot; FontSize=&quot;20&quot;&gt;
                    &lt;Entry.Behaviors&gt;
                        &lt;akka:EntryCompletedBehavior Command=&quot;{Binding StartCrawlingCommand}&quot; /&gt;
                    &lt;/Entry.Behaviors&gt;
                &lt;/Entry&gt;
            &lt;/StackLayout&gt;

            &lt;ListView ItemsSource=&quot;{Binding Results}&quot;&gt;
                &lt;ListView.ItemTemplate&gt;
                    &lt;DataTemplate&gt;
                        &lt;TextCell Text=&quot;{Binding Title}&quot; Detail=&quot;{Binding Url}&quot; /&gt;
                    &lt;/DataTemplate&gt;
                &lt;/ListView.ItemTemplate&gt;

                &lt;ListView.Header&gt;
                    &lt;StackLayout Orientation=&quot;Horizontal&quot; Padding=&quot;10&quot; Spacing=&quot;10&quot;&gt;
                        &lt;Label Text=&quot;{Binding Results.Count}&quot; /&gt;
                        &lt;Label Text=&quot;links crawled&quot; /&gt;
                    &lt;/StackLayout&gt;
                &lt;/ListView.Header&gt;
            &lt;/ListView&gt;
        &lt;/StackLayout&gt;
    &lt;/ContentPage.Content&gt;
&lt;/ContentPage&gt;
</code></pre>
<p>In the code-behind all we need to do is set up the view model:</p>
<pre><code class="language-csharp">public partial class MainPage : ContentPage
{
    public MainPage()
    {
        InitializeComponent();

        BindingContext = new MainViewModel();
    }
}
</code></pre>
<p>That&apos;s all we need - the view model and binding take care of the rest. Let&apos;s give it a shot:</p>
<p><img src="https://gregshackles.com/content/images/2017/10/xamarinakkaios.png" alt="iOS app scraping my site" loading="lazy"></p>
<p>Not bad! This obviously only scratches the surface of what Akka can do, but I&apos;m pretty excited to be able to start leveraging it in Xamarin applications going forward. The full app can be found in my <a href="https://github.com/gshackles/akka-samples"><code>akka-samples</code> repository on GitHub</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[A Mobile DevOps Retrospective Part I: 150+ Apps Later]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I recently had the pleasure of writing a series of guest posts for Microsoft&apos;s Mobile Center blog about mobile DevOps and our experiences over the years at Olo. I&apos;m happy to say that part one of the series is now out, with parts two and</p>]]></description><link>https://gregshackles.com/a-mobile-devops-retrospective-part-i-150-apps-later/</link><guid isPermaLink="false">61ce48a0437e8200017d4158</guid><category><![CDATA[Xamarin]]></category><category><![CDATA[Mobile Center]]></category><category><![CDATA[DevOps]]></category><dc:creator><![CDATA[Greg Shackles]]></dc:creator><pubDate>Tue, 26 Sep 2017 16:18:30 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I recently had the pleasure of writing a series of guest posts for Microsoft&apos;s Mobile Center blog about mobile DevOps and our experiences over the years at Olo. I&apos;m happy to say that part one of the series is now out, with parts two and three coming soon. Enjoy!</p>
<p><a href="http://aka.ms/Ocueki">A Mobile DevOps Retrospective Part I: 150+ Apps Later</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Building a Voice-Driven TV Remote - Part 8: Tracking Performance with Application Insights]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This is part eight of the Building a Voice-Driven TV Remote series:</p>
<ol>
<li><a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-1-the-data/">Getting The Data</a></li>
<li><a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-2-adding-search/">Adding Search</a></li>
<li><a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-3-the-device-api/">The Device API</a></li>
<li><a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-4-some-basic-alexa-commands/">Some Basic Alexa Commands</a></li>
<li><a href="https://gregshackles.com/building-a-voice-driven-tv-remote-part-5-adding-a-search-command/">Adding a Listings Search Command</a></li>
<li><a href="https://gregshackles.com/building-a-voice-driven-tv-remote-part-6-starting-to-migrate-from-http-to-mqtt/">Starting to Migrate from HTTP to MQTT</a></li>
<li><a href="https://gregshackles.com/building-a-voice-driven-tv-remote-part-7-finishing-the-migration-from-http-to-mqtt/">Finishing the Migration from HTTP to MQTT</a></li>
<li><strong>Tracking Performance with Application Insights</strong></li>
</ol>
<hr>
<p>In the</p>]]></description><link>https://gregshackles.com/building-a-voice-driven-tv-remote-part-8-tracking-performance-with-application-insights/</link><guid isPermaLink="false">61ce48a0437e8200017d4157</guid><category><![CDATA[Azure]]></category><category><![CDATA[F#]]></category><category><![CDATA[Echo]]></category><category><![CDATA[Speech Recognition]]></category><category><![CDATA[Serverless]]></category><category><![CDATA[Remote]]></category><dc:creator><![CDATA[Greg Shackles]]></dc:creator><pubDate>Sun, 13 Aug 2017 18:21:35 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This is part eight of the Building a Voice-Driven TV Remote series:</p>
<ol>
<li><a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-1-the-data/">Getting The Data</a></li>
<li><a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-2-adding-search/">Adding Search</a></li>
<li><a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-3-the-device-api/">The Device API</a></li>
<li><a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-4-some-basic-alexa-commands/">Some Basic Alexa Commands</a></li>
<li><a href="https://gregshackles.com/building-a-voice-driven-tv-remote-part-5-adding-a-search-command/">Adding a Listings Search Command</a></li>
<li><a href="https://gregshackles.com/building-a-voice-driven-tv-remote-part-6-starting-to-migrate-from-http-to-mqtt/">Starting to Migrate from HTTP to MQTT</a></li>
<li><a href="https://gregshackles.com/building-a-voice-driven-tv-remote-part-7-finishing-the-migration-from-http-to-mqtt/">Finishing the Migration from HTTP to MQTT</a></li>
<li><strong>Tracking Performance with Application Insights</strong></li>
</ol>
<hr>
<p>In the last part I vastly improved the performance of the app by switching from HTTP to MQTT. With that in place, the next step I wanted to take was seeing where the remaining time was being spent. How quick are calls to the search service or the database? How long are the nightly downloads and imports taking? To answer these questions I set out to add Application Insights to the app to really get some visibility.</p>
<h1 id="initialsetup">Initial Setup</h1>
<p>The basic setup for Application Insights is as easy as it gets. After you create a new Insights app and get the instrumentation key from it, all you need to do is add an app setting to your function app named <code>APPINSIGHTS_INSTRUMENTATIONKEY</code> and you&apos;ll start getting metrics reporting to Insights automatically. With that in place, for example, I can easily see the execution history for the <code>DownloadLineup</code> function:</p>
<p><img src="https://gregshackles.com/content/images/2017/08/perf1.png" alt="DownloadLineup performance" loading="lazy"></p>
<p>Each morning that takes around 6-8 seconds to run, which is perfectly fine by me.</p>
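<p>If you prefer to script that setup rather than use the portal, the app setting can also be applied with the Azure CLI (a sketch - the function app and resource group names here are placeholders):</p>
<pre><code>az functionapp config appsettings set \
    --name my-function-app \
    --resource-group my-resource-group \
    --settings &quot;APPINSIGHTS_INSTRUMENTATIONKEY=&lt;your-instrumentation-key&gt;&quot;
</code></pre>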
<h1 id="trackingsuboperations">Tracking Sub-Operations</h1>
<p>The main thing I wanted to get insight into is where the <code>RemoteSkill</code> function was spending its time, to figure out how I can make that as fast as possible. I could already see the overall function durations using the same method as above, but that doesn&apos;t show how the time breaks down within a single execution.</p>
<p>To add this in, I pulled in the <code>Microsoft.ApplicationInsights</code> NuGet package, which allows me to interact with Application Insights programmatically. First, I added a new file named <code>telemetry.fsx</code> to the function:</p>
<pre><code class="language-fsharp">module Telemetry

open System
open Microsoft.ApplicationInsights
open Microsoft.ApplicationInsights.Extensibility
open Microsoft.ApplicationInsights.DataContracts

let private instrumentationKey = Environment.GetEnvironmentVariable(&quot;APPINSIGHTS_INSTRUMENTATIONKEY&quot;)
let private telemetryClient = TelemetryClient(InstrumentationKey = instrumentationKey)

let setOperationId operationId =
    telemetryClient.Context.Operation.Id &lt;- operationId

let startOperation (name:string) = 
    telemetryClient.StartOperation&lt;DependencyTelemetry&gt;(name)
</code></pre>
<p>Here I create a <code>TelemetryClient</code> and expose a couple of utility functions: one to set the overall operation ID so that all sub-operations are tracked within the overall request operation, and another to start a new sub-operation. The operation is <code>IDisposable</code>, so its duration runs from when it&apos;s created until it&apos;s disposed.</p>
<p>With that in place, all I needed to do was start sprinkling calls to it around the function implementation. For example, to track the calls to the search service in <code>search.fsx</code>:</p>
<pre><code class="language-fsharp">let private searchShows request =
    use operation = Telemetry.startOperation &quot;ShowSearch&quot;
    search &quot;shows&quot; request |&gt; ShowSearchResults.Parse

let private searchChannels request =
    use operation = Telemetry.startOperation &quot;ChannelSearch&quot;
    search &quot;channels&quot; request |&gt; ChannelSearchResults.Parse
</code></pre>
<p>Or to track executing a command in <code>commands.fsx</code>:</p>
<pre><code class="language-fsharp">let executeCommand commandSlug = 
    use operation = Telemetry.startOperation &quot;ExecuteCommand&quot;
    ...
</code></pre>
<p>With a bunch of these in place I started executing commands and searches via Alexa to see how it looked. Here&apos;s an example of how one request stacked up:</p>
<p><img src="https://gregshackles.com/content/images/2017/08/perf2.png" alt="RemoteSkill performance" loading="lazy"></p>
<p>The vertical order can get a little mixed up when operations start really close together, but you can still get a good idea of the flow here. In this case the whole request took 1385ms, 1024ms of which was spent executing the commands to change the channel. This is somewhat expected, since I had added a 250ms sleep in between each command execution to avoid overloading my cable box - with four commands needed to change the channel, those sleeps alone account for about a second.</p>
<p>One obvious performance improvement in terms of the function itself would be to allow sending multiple commands at once via IoT Hub, and do the pauses on the Raspberry Pi instead of in the function. This would also be a nice cost optimization, since with Azure Functions you pay for the time your function spends running, and most of the time here is actually spent sleeping.</p>
<p>The calls to the search service are also a bit slower than I expected, so I&apos;ll be looking into tweaking those as well.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Looking Ahead to Xamarin.Forms 3.0]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>It&apos;s been a few years now since Xamarin.Forms was released into the world, and it continues to be a popular and evolving framework choice for Xamarin developers. Later this year Microsoft is planning to release Xamarin.Forms 3.0, its third major release of the framework, which</p>]]></description><link>https://gregshackles.com/looking-ahead-to-xamarin-forms-3-0/</link><guid isPermaLink="false">61ce48a0437e8200017d4156</guid><category><![CDATA[Visual Studio]]></category><category><![CDATA[Xamarin]]></category><category><![CDATA[Xamarin.Forms]]></category><dc:creator><![CDATA[Greg Shackles]]></dc:creator><pubDate>Fri, 11 Aug 2017 13:21:31 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>It&apos;s been a few years now since Xamarin.Forms was released into the world, and it continues to be a popular and evolving framework choice for Xamarin developers. Later this year Microsoft is planning to release Xamarin.Forms 3.0, its third major release of the framework, which is slated to ship with a lot of exciting features and improvements. While it certainly won&apos;t be comprehensive, as the feature set is large and still in motion, in this article I&apos;ll walk through some of the highlights of what&apos;s coming later this year for Xamarin.Forms developers.</p>
<p>Read the rest over at <a href="https://visualstudiomagazine.com/articles/2017/08/01/xamarinforms3_0.aspx">Visual Studio Magazine</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Building a Voice-Driven TV Remote - Part 7: Finishing the Migration from HTTP to MQTT]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This is part seven of the Building a Voice-Driven TV Remote series:</p>
<ol>
<li><a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-1-the-data/">Getting The Data</a></li>
<li><a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-2-adding-search/">Adding Search</a></li>
<li><a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-3-the-device-api/">The Device API</a></li>
<li><a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-4-some-basic-alexa-commands/">Some Basic Alexa Commands</a></li>
<li><a href="https://gregshackles.com/building-a-voice-driven-tv-remote-part-5-adding-a-search-command/">Adding a Listings Search Command</a></li>
<li><a href="https://gregshackles.com/building-a-voice-driven-tv-remote-part-6-starting-to-migrate-from-http-to-mqtt/">Starting to Migrate from HTTP to MQTT</a></li>
<li><strong>Finishing the Migration from HTTP to MQTT</strong></li>
<li><a href="https://gregshackles.com/building-a-voice-driven-tv-remote-part-8-tracking-performance-with-application-insights/">Tracking Performance with Application Insights</a></li>
</ol>
<hr>
<p>In the</p>]]></description><link>https://gregshackles.com/building-a-voice-driven-tv-remote-part-7-finishing-the-migration-from-http-to-mqtt/</link><guid isPermaLink="false">61ce48a0437e8200017d4155</guid><category><![CDATA[Azure]]></category><category><![CDATA[F#]]></category><category><![CDATA[Echo]]></category><category><![CDATA[Speech Recognition]]></category><category><![CDATA[Serverless]]></category><category><![CDATA[Remote]]></category><dc:creator><![CDATA[Greg Shackles]]></dc:creator><pubDate>Sun, 06 Aug 2017 23:18:11 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This is part seven of the Building a Voice-Driven TV Remote series:</p>
<ol>
<li><a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-1-the-data/">Getting The Data</a></li>
<li><a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-2-adding-search/">Adding Search</a></li>
<li><a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-3-the-device-api/">The Device API</a></li>
<li><a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-4-some-basic-alexa-commands/">Some Basic Alexa Commands</a></li>
<li><a href="https://gregshackles.com/building-a-voice-driven-tv-remote-part-5-adding-a-search-command/">Adding a Listings Search Command</a></li>
<li><a href="https://gregshackles.com/building-a-voice-driven-tv-remote-part-6-starting-to-migrate-from-http-to-mqtt/">Starting to Migrate from HTTP to MQTT</a></li>
<li><strong>Finishing the Migration from HTTP to MQTT</strong></li>
<li><a href="https://gregshackles.com/building-a-voice-driven-tv-remote-part-8-tracking-performance-with-application-insights/">Tracking Performance with Application Insights</a></li>
</ol>
<hr>
<p>In the <a href="https://gregshackles.com/building-a-voice-driven-tv-remote-part-6-starting-to-migrate-from-http-to-mqtt/">last post of the series</a> I introduced a MQTT bridge to connect my Harmony system with Azure IoT, so now it&apos;s time to switch things over and remove the need for my functions to make API calls into my house.</p>
<h1 id="switchingcommands">Switching Commands</h1>
<p>Naturally Microsoft has a handy package called <a href="https://www.nuget.org/packages/Microsoft.Azure.Devices">Microsoft.Azure.Devices</a> that makes communicating with Azure IoT Hub a breeze. Recall that in the implementation of <code>RemoteSkill</code>, the <code>commands.fsx</code> file sent commands to the device like this:</p>
<pre><code class="language-fsharp">let private makeRequest method urlPath =
    let url = sprintf &quot;%s/%s&quot; (Environment.GetEnvironmentVariable(&quot;HarmonyApiUrlBase&quot;)) urlPath
    let authHeader = &quot;Authorization&quot;, (Environment.GetEnvironmentVariable(&quot;HarmonyApiKey&quot;))

    Http.RequestString(url, httpMethod = method, headers = [authHeader])

let executeCommand commandSlug = sprintf &quot;commands/%s&quot; commandSlug |&gt; makeRequest &quot;POST&quot; |&gt; ignore
</code></pre>
<p>This meant that every command incurred all of the overhead of an HTTP call into my house. Beyond requiring my house to expose a public API, it was also a performance killer, since a command here effectively maps to a button press on a remote: entering a channel number resulted in four commands being sent - three for the digits and then one to hit enter.</p>
<p>Here&apos;s an updated implementation that sends the command through the IoT Hub:</p>
<pre><code class="language-fsharp">let serviceClient = ServiceClient.CreateFromConnectionString (Environment.GetEnvironmentVariable(&quot;IoTHubConnectionString&quot;))

let executeCommand commandSlug = 
    async {
        // await the send, instead of discarding the Async, so the message is
        // actually delivered before the pause below
        do! sprintf &quot;harmony-api/hubs/living-room/command;%s&quot; commandSlug
            |&gt; Encoding.ASCII.GetBytes
            |&gt; fun bytes -&gt; new Message(bytes)
            |&gt; fun message -&gt; serviceClient.SendAsync(&quot;harmony-bridge&quot;, message)
            |&gt; Async.AwaitIAsyncResult
            |&gt; Async.Ignore

        do! Async.Sleep 250
    } |&gt; Async.RunSynchronously
</code></pre>
<p>Since the signature of <code>executeCommand</code> remains unchanged, that&apos;s all that actually has to change! The 250ms sleep is there to leave a little breathing room between commands sent back to back. While testing this I quickly found that it was easy to crash my cable box by sending requests too quickly:</p>
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Well, today&#x2019;s accomplishment is crashing my FiOS cable box by sending commands to it via an Azure IoT Hub <a href="https://t.co/eBdg4r1nLa">pic.twitter.com/eBdg4r1nLa</a></p>&#x2014; Greg Shackles (@gshackles) <a href="https://twitter.com/gshackles/status/893943378181795840">August 5, 2017</a></blockquote>
<script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>With just this one method implementation change, all commands being sent into my house are now being routed through IoT Hub.</p>
<h1 id="switchingqueries">Switching Queries</h1>
<p>Switching command execution over to MQTT gave a noticeable performance boost, but it didn&apos;t totally eliminate the need for a public-facing API that the functions could access. The <code>commands.fsx</code> file also contained one other method, which queried the API for the commands available in my media center&apos;s current activity:</p>
<pre><code class="language-fsharp">let getCommand (label: string) =
    makeRequest &quot;GET&quot; &quot;commands&quot;
    |&gt; CommandsResponse.Parse
    |&gt; fun res -&gt; res.Commands
    |&gt; Seq.tryFind (fun command -&gt; command.Label.ToLowerInvariant() = label.ToLowerInvariant())
</code></pre>
<p>This one was a little trickier to switch to MQTT since it relies on state that changes with the active activity (where an activity can be watching TV, AppleTV, Fire Stick, etc). To solve this, I decided to add some more persistence to the application: every time the activity changes, it updates the persisted set of commands available to use.</p>
<h2 id="addingthedatabase">Adding the Database</h2>
<p>I thought about introducing something like Redis or CosmosDB for it, but that ended up being at odds with my goal of keeping this thing as cheap as possible. I already have a SQL instance that costs around $5 per month, while adding Redis would be another $16, and CosmosDB another $24. Based on price, adding another table in SQL Server was the obvious answer. I went ahead and created this simple table:</p>
<pre><code class="language-sql">CREATE TABLE AvailableCommand(
	AvailableCommandId int IDENTITY(1,1) NOT NULL,
	Name nvarchar(32) NOT NULL,
	Slug nvarchar(16) NOT NULL,
	Label nvarchar(32) NOT NULL
)
</code></pre>
<p>There&apos;s no real need to keep any historic data, so each time the activity changes I&apos;ll just blow away the old data and replace it with the new set. There are realistically only going to be 20-30 commands for each activity anyway.</p>
<h2 id="storingthedata">Storing the Data</h2>
<p>With the database table in place, the next task was to update the IoT bridge I created in the last post to subscribe to the MQTT topic that gets notified when the activity changes. When it does, the bridge can call the HTTP API locally (so the API never needs to be exposed beyond the Raspberry Pi itself or the local network) to get the new set of commands, and persist them to Azure.</p>
<p>To do that I pulled in a couple npm packages to simplify HTTP calls and database access:</p>
<pre><code>npm i --save request-promise tedious 
</code></pre>
<p>Next I&apos;ll add a function that, given a list of commands, updates the SQL database:</p>
<pre><code class="language-javascript">function updateAvailableCommands(commands) {
    const connection = new tedious.Connection(config.sqlConfig);
    connection.on(&apos;connect&apos;, err =&gt; {
        if (err) {
            console.error(&apos;Error connecting to SQL&apos;, err);
            return;
        }

        const truncateRequest = new tedious.Request(&quot;truncate table AvailableCommand&quot;, err =&gt; {
            if (err) {
                console.error(&apos;Error truncating table&apos;, err);
            }
        });

        truncateRequest.on(&apos;requestCompleted&apos;, () =&gt; {
            const updateCommands = connection.newBulkLoad(&apos;AvailableCommand&apos;, (err, rowCount) =&gt; {
                if (err) {
                    console.error(&apos;Error inserting commands&apos;, err);
                } else {
                    console.log(`Inserted ${rowCount} command(s)`);
                }
            });
            updateCommands.addColumn(&apos;Name&apos;, tedious.TYPES.NVarChar, { nullable: false });
            updateCommands.addColumn(&apos;Slug&apos;, tedious.TYPES.NVarChar, { nullable: false });
            updateCommands.addColumn(&apos;Label&apos;, tedious.TYPES.NVarChar, { nullable: false });

            commands.forEach(command =&gt; 
                updateCommands.addRow({ Name: command.name, 
                                        Slug: command.slug, 
                                        Label: command.label }));

            connection.execBulkLoad(updateCommands);
        });

        connection.execSql(truncateRequest);
    }); 
}
</code></pre>
<p>Most of the code here is either JavaScript ceremony or error logging, so there&apos;s not too much going on. It simply truncates the existing data in the table and bulk loads the new data in.</p>
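<p>One edge worth noting: the table defines <code>Name</code> and <code>Label</code> as <code>nvarchar(32)</code> and <code>Slug</code> as <code>nvarchar(16)</code>, so an unexpectedly long value coming back from the hub would fail the bulk load. A tiny helper (purely a sketch, not part of the actual bridge) could clamp each field to its column size before adding the row:</p>

```javascript
// Hypothetical helper: clamp command fields to the column sizes defined in
// the AvailableCommand table (Name/Label: nvarchar(32), Slug: nvarchar(16))
// so an unexpectedly long value can't fail the bulk load.
const COLUMN_LIMITS = { name: 32, slug: 16, label: 32 };

function toRow(command) {
    const clamp = (value, max) => String(value || '').substring(0, max);
    return {
        Name: clamp(command.name, COLUMN_LIMITS.name),
        Slug: clamp(command.slug, COLUMN_LIMITS.slug),
        Label: clamp(command.label, COLUMN_LIMITS.label)
    };
}
```

<p>The <code>forEach</code> above would then just become <code>commands.forEach(command =&gt; updateCommands.addRow(toRow(command)));</code>.</p>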
<p>Lastly, I just need to subscribe to the topic and update the commands when the activity changes:</p>
<pre><code class="language-javascript">// request-promise was installed above; broker is the MQTT broker instance from the previous post
const request = require(&apos;request-promise&apos;);

broker.on(&apos;publish&apos;, packet =&gt; {
    if (packet.topic !== &apos;harmony-api/hubs/living-room/current_activity&apos;) {
        return;
    }

    request({ url: &apos;http://localhost:8282/hubs/living-room/commands&apos;, json: true })
        .then(res =&gt; updateAvailableCommands(res.commands));
});
</code></pre>
<p>Now whenever the activity changes the SQL database will be updated with the new set of available commands.</p>
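<p>Since the bridge only cares about a single hub today, the handler just hardcodes the topic. If I ever added a second hub, a small helper (again, just a sketch and not in the actual bridge) could match the topic pattern and pull out the hub slug instead:</p>

```javascript
// Hypothetical sketch: match any harmony-api current_activity topic and
// extract the hub slug, so one handler could refresh commands per hub.
const ACTIVITY_TOPIC = /^harmony-api\/hubs\/([^/]+)\/current_activity$/;

function getHubFromTopic(topic) {
    const match = ACTIVITY_TOPIC.exec(topic);
    return match ? match[1] : null;
}
```

<p>The handler would then start with <code>const hub = getHubFromTopic(packet.topic); if (!hub) return;</code> and use the slug when building the local API URL.</p>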
<h2 id="updatingthefunction">Updating the Function</h2>
<p>Now that I&apos;ve got a data store in place with a list of the available commands, I just need to rip out that last API call from <code>commands.fsx</code> and replace it with a database read:</p>
<pre><code class="language-fsharp">[&lt;Literal&gt;]
let configFile = &quot;D:\\home\\site\\wwwroot\\RemoteSkill\\app.config&quot;

let getCommand (label: string) =
    use cmd = new SqlCommandProvider&lt;&quot;SELECT Slug FROM AvailableCommand WHERE Label=@label&quot;, &quot;name=TVListings&quot;, ConfigFile=configFile, SingleRow=true&gt;()
    cmd.Execute(label = label)
</code></pre>
<p>Thanks to the beauty of the SQL type provider, that&apos;s all that&apos;s actually needed to read out the command in a typesafe way. This does change the signature of <code>getCommand</code> from the previous version, though: it now returns only the slug instead of the full command object. That just means I need to tweak the <code>handleDirectCommand</code> method in <code>run.fsx</code> to expect the slug, which is all it cared about anyway:</p>
<pre><code class="language-fsharp">let handleDirectCommand (intent: Intent) =
    match (Commands.getCommand intent.Slots.[&quot;command&quot;].Value) with
    | Some(slug) -&gt;
        Commands.executeCommand slug
        buildResponse &quot;OK&quot; true
    | None -&gt; buildResponse &quot;Sorry, that command is not available right now&quot; true
</code></pre>
<p>And that&apos;s it! I no longer have any need to make direct outbound API calls from the functions into my house, so I was able to shut down the NGINX site altogether, as well as the No-IP job and Let&apos;s Encrypt...pretty much everything I&apos;d done in <a href="http://gregshackles.com/building-a-voice-driven-tv-remote-part-3-the-device-api/">part 3 of this series</a>.</p>
<p>There are still some improvements that can be made, but the performance gain from switching to MQTT has been massive, and it makes this skill so much more useful. Here&apos;s a little video of me channel surfing on the current version:</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/DoJ0NOu_rrQ" frameborder="0" allowfullscreen></iframe>
<hr>
<p>Next post in series: <a href="https://gregshackles.com/building-a-voice-driven-tv-remote-part-8-tracking-performance-with-application-insights/">Tracking Performance with Application Insights</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>