• Filtering entity collections with LLBLGen Pro ORM

    In the previous post we covered fetching nested entities (prefetching); in this one we'll go over filtering entity collections. Filtering is one of the most commonly used database operations, and in LLBLGen it means working with predicate buckets.

    We’ll re-use the same DB schema (diagram made using QuickDBD) throughout the whole LLBLGen series:

    We’ll cover two ways of filtering in this post. In the first one we’ll filter by a field on the main entity of the collection (we’ll filter products by price). In the second one we’ll filter by fields of related entities (for instance, let’s filter all orders created by customers named “Joe”).
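
    Here’s a minimal sketch of the first filter, assuming the adapter (rather than SelfServicing) paradigm; names like ProductFields.Price and ProductEntityFactory are what LLBLGen would generate from this schema, so treat them as assumptions:

        // Fetch only the products more expensive than 100 (hypothetical threshold).
        var products = new EntityCollection<ProductEntity>(new ProductEntityFactory());

        var filter = new RelationPredicateBucket();
        filter.PredicateExpression.Add(ProductFields.Price > 100);

        using (var adapter = new DataAccessAdapter())
        {
            adapter.FetchEntityCollection(products, filter);
        }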

    Please note that in addition to a predicate expression, the second filter also has to define a relation, as shown below. Without it, we’d get a runtime error because of a mismatch between the entity collection type and the entity type of the field we’re filtering by.
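
    A sketch of the second filter under the same assumptions; the relation name (CustomerEntityUsingCustomerId here) depends on how the foreign keys are defined in the schema:

        // Fetch all orders created by customers named "Joe".
        var orders = new EntityCollection<OrderEntity>(new OrderEntityFactory());

        var filter = new RelationPredicateBucket();
        filter.PredicateExpression.Add(CustomerFields.Name == "Joe");

        // Without this relation, the fetch would fail at runtime because
        // CustomerFields.Name doesn't belong to the OrderEntity we're fetching.
        filter.Relations.Add(OrderEntity.Relations.CustomerEntityUsingCustomerId);

        using (var adapter = new DataAccessAdapter())
        {
            adapter.FetchEntityCollection(orders, filter);
        }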

    In the following posts, we’ll cover additional filtering scenarios which are perhaps not as common as these, but which you’ll still bump into from time to time.

    I hope this was helpful, and if you have any questions, please drop a comment.

  • Fetching nested entities with LLBLGen Pro ORM

    In this post, I’ll quickly explain how to fetch “nested” (related) entities with the LLBLGen Pro ORM framework. In LLBLGen this operation is called prefetching. By adding prefetch paths, we’re telling LLBLGen which related entities we want fetched from the database together with our main entity. These prefetched entities are then easily accessible as properties and/or sub-properties of the main object.

    Let’s take the following DB schema as an example (diagram made using QuickDBD):

    The code below contains two simple examples of prefetching based on two different starting points (with OrderEntity and then CustomerEntity as the starting entity).

    In the first example we’re prefetching multiple entity types related to the OrderEntity (CustomerEntity and ProductEntity). In the second example we start with the CustomerEntity, prefetch all of its OrderEntity objects, and then prefetch multiple sub-entities related to those orders (ProductEntity and OrderStatusEntity).
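
    A sketch of both examples, assuming adapter-style generated code; the prefetch path names (e.g. PrefetchPathProducts) are derived from the schema and may differ in your generated code:

        using (var adapter = new DataAccessAdapter())
        {
            // Example 1: orders together with their related customer and products.
            var orders = new EntityCollection<OrderEntity>(new OrderEntityFactory());
            var orderPath = new PrefetchPath2((int)EntityType.OrderEntity);
            orderPath.Add(OrderEntity.PrefetchPathCustomer);
            orderPath.Add(OrderEntity.PrefetchPathProducts);
            adapter.FetchEntityCollection(orders, null, orderPath);

            // Example 2: customers -> orders -> products and order statuses.
            var customers = new EntityCollection<CustomerEntity>(new CustomerEntityFactory());
            var customerPath = new PrefetchPath2((int)EntityType.CustomerEntity);
            var ordersElement = customerPath.Add(CustomerEntity.PrefetchPathOrders);
            ordersElement.SubPath.Add(OrderEntity.PrefetchPathProducts);
            ordersElement.SubPath.Add(OrderEntity.PrefetchPathOrderStatus);
            adapter.FetchEntityCollection(customers, null, customerPath);
        }

    After the fetch, the related entities are reachable directly, e.g. orders[0].Customer or customers[0].Orders[0].Products.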

    I envisioned this as a short series of blog posts focusing on fetching and filtering concepts which you’ll find yourself using every day with LLBLGen. In the following posts we’ll cover a couple of different filtering techniques depending on what you need to filter your entity collections by.

    Hope you found the post helpful. If you have any questions, drop a comment below. Cheers!

  • Contributing back – InfluxData.Net NuGet

    On a project that I’m currently working on at Dovetail, we’re using the InfluxDb time-series database, and it’s a really cool thing to be working with.

    Recently we had to move to a newer version of InfluxDb (v0.9.6), but the library we were using at the time had become a bit stale and didn’t support it. So, partially out of necessity, and partially because I can and want to, I decided to update the existing library and re-release it as a NuGet package under a slightly different name.

    *drumroll* – the code for the InfluxData.Net NuGet is available on Github. It has already had a few code contributions, which is quite exciting. The whole codebase has been refactored, additional tests have been written, and I believe the codebase is now quite stable. I plan on expanding the library API to support most of what InfluxDb provides, and I also plan on implementing the APIs for other InfluxData products such as Kapacitor and Telegraf.

    If people keep using it in the future, I’ll consider it a success and it will make me happy. :)

  • Loading embedded resource images into PdfSharp/MigraDoc for non-web project layers

    There are some cases (for example on Azure) when you can’t really rely on ASP’s Web-layer file resources, “server paths”, or file-management libraries to access your static resource files. For example, you might have a business layer which doesn’t have direct access to another layer’s files but needs to generate a PDF using PdfSharp/MigraDoc.

    Here’s how it can be done: add an image file to your layer, then right-click it and open its properties. Set the “Build Action” to Embedded Resource, and the “Copy to Output Directory” to Copy if newer. After that, use three simple methods to load the image from the assembly into a Stream, convert it to a byte[] array, and encode it as a base64 string; a sketch follows below.
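
    A minimal sketch of those helpers; the exact resource name you’ll pass in is your assembly’s default namespace plus the folder path and file name (e.g. “MyLayer.Resources.logo.png”, a hypothetical example):

        using System;
        using System.IO;
        using System.Reflection;

        public static class EmbeddedImageLoader
        {
            // Loads an embedded resource from the assembly containing this class.
            // Returns null if the resource name doesn't match exactly.
            public static Stream GetResourceStream(string resourceName)
            {
                return Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName);
            }

            // Reads the whole stream into a byte array.
            public static byte[] ToByteArray(Stream stream)
            {
                using (var memoryStream = new MemoryStream())
                {
                    stream.CopyTo(memoryStream);
                    return memoryStream.ToArray();
                }
            }

            // Encodes the bytes as a base64 string.
            public static string ToBase64(byte[] bytes)
            {
                return Convert.ToBase64String(bytes);
            }
        }

    Newer MigraDoc builds can then consume the string directly, e.g. section.AddImage("base64:" + base64String), though you should check that your version supports the “base64:” prefix.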

    By using the base64 representation of the image, you will be able to embed it into your PDF document.

    Cheers!

  • Downloading/streaming Azure Storage private container blobs to AngularJS through .Net WebAPI

    When our Azure storage contains files that are meant to be publicly accessible, it’s pretty trivial to deliver them to the end-user. We can either embed such items (e.g. images) or simply add links which point to them (e.g. PDFs), because Azure provides direct links to them. But what happens when these files contain sensitive data that is not meant for just anyone? Perhaps some kind of reports?

    Well, it gets a bit more complicated. Since these files no longer have publicly accessible URIs, there are several steps which we need to go through:

    • Authenticate against Azure using SDK
    • Load the file into MemoryStream
    • Deliver the stream to the client (browser)
    • Convert the byte array into an actual file on the client-side and simulate “downloading”

    Since we’ll need more than just the MemoryStream, we’ll wrap it together with the file metadata into a model object.
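
    A minimal sketch of such a model; the class and property names here are illustrative rather than taken from the original listing:

        using System.IO;

        // Wraps the downloaded blob content together with the metadata
        // needed to build the HTTP response later on.
        public class AzureBlobModel
        {
            public string FileName { get; set; }
            public string ContentType { get; set; }
            public long? FileSize { get; set; }
            public MemoryStream Stream { get; set; }
        }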

    We’ll use an AzureProvider class to authenticate against Azure, download the file, and create the model object.
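
    A sketch of that provider using the classic WindowsAzure.Storage SDK; the method name and connection string handling are assumptions:

        using System.IO;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.Storage;
        using Microsoft.WindowsAzure.Storage.Blob;

        public class AzureProvider
        {
            private readonly string _connectionString;

            public AzureProvider(string connectionString)
            {
                _connectionString = connectionString;
            }

            public async Task<AzureBlobModel> GetBlobAsync(string containerName, string blobName)
            {
                // Authenticate and get a reference to the blob in the private container.
                var account = CloudStorageAccount.Parse(_connectionString);
                var container = account.CreateCloudBlobClient().GetContainerReference(containerName);
                var blob = container.GetBlockBlobReference(blobName);

                // Download the blob into memory and rewind the stream for the response.
                var stream = new MemoryStream();
                await blob.DownloadToStreamAsync(stream);
                stream.Position = 0;

                return new AzureBlobModel
                {
                    FileName = blobName,
                    ContentType = blob.Properties.ContentType,
                    FileSize = blob.Properties.Length,
                    Stream = stream
                };
            }
        }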

    Our actual controller will inherit from a BaseApi class which contains a custom IHttpActionResult method which we can name AzureBlobOk. This is something pretty reusable, so it’s good to have it at hand in your base class. It sets up all the content headers, attaches the stream as the response content payload, and returns HTTP status 200 (OK – everything went fine).
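
    A sketch of the base class; AzureBlobOk is the name used above, while the surrounding plumbing is an assumption:

        using System.Net;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Web.Http;

        public abstract class BaseApiController : ApiController
        {
            protected IHttpActionResult AzureBlobOk(AzureBlobModel blob)
            {
                // Attach the stream as the response content payload.
                var response = new HttpResponseMessage(HttpStatusCode.OK)
                {
                    Content = new StreamContent(blob.Stream)
                };

                // Set up the content headers so the browser knows what it's getting.
                response.Content.Headers.ContentType =
                    new MediaTypeHeaderValue(blob.ContentType ?? "application/octet-stream");
                response.Content.Headers.ContentLength = blob.FileSize;
                response.Content.Headers.ContentDisposition =
                    new ContentDispositionHeaderValue("attachment") { FileName = blob.FileName };

                return ResponseMessage(response);
            }
        }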

    The actual controller is pretty simple.
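
    Something along these lines, with the route, parameter, and container names being illustrative:

        using System.Threading.Tasks;
        using System.Web.Http;

        public class ReportsController : BaseApiController
        {
            private readonly AzureProvider _azureProvider =
                new AzureProvider("<storage-connection-string>");

            [HttpGet]
            public async Task<IHttpActionResult> GetReport(string fileName)
            {
                var blob = await _azureProvider.GetBlobAsync("reports", fileName);
                return AzureBlobOk(blob);
            }
        }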

    On the client side, we’ll need a service to actually convert the byte array that we got from the API into something meaningful. I tried various approaches, but in the end decided to use FileSaver.js, which “implements the HTML5 W3C saveAs() FileSaver interface in browsers that do not natively support it”. It turns the byte array into an actual file and prompts the user to download it.

    This service can easily be consumed by injecting it into your AngularJS controllers and calling its .getBlob() function, which will do all the heavy lifting for you.

    Hope this helped, enjoy! :)

  • PdfSharp/MigraDoc to Azure Storage in-memory upload

    From my (somewhat limited) experience, PdfSharp/MigraDoc seems like a pretty fine and powerful library for creating .pdf documents, but its documentation, not so much. It’s a bit all over the place, with multiple different NuGet versions/builds and outdated StackOverflow code samples not really helping the situation.

    However, creating a .pdf document in memory and uploading it straight to Azure is not really that complicated. When might this be useful? For example, when you need to generate a report that, instead of being immediately handed to the user, just needs to be stored for later access.

    The magic word we’re looking for is MemoryStream. We’ll use two classes: one which takes a MemoryStream and uploads it to Azure (AzureProvider.cs), and another which creates a very simple MigraDoc document (ReportProvider.cs) that you can build upon, and then feeds that document to the AzureProvider in the form of a MemoryStream.

    The code is pretty straightforward.
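
    A sketch of the two classes; the names follow the description above, while the document contents and the container/blob names are placeholders:

        using System.IO;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.Storage;
        using MigraDoc.DocumentObjectModel;
        using MigraDoc.Rendering;

        public class ReportProvider
        {
            public MemoryStream CreateReport()
            {
                // Build a very simple MigraDoc document to build upon.
                var document = new Document();
                var section = document.AddSection();
                section.AddParagraph("Hello from MigraDoc!");

                // Render the PDF entirely in memory.
                var renderer = new PdfDocumentRenderer(true) { Document = document };
                renderer.RenderDocument();

                var stream = new MemoryStream();
                renderer.PdfDocument.Save(stream, false); // false keeps the stream open
                stream.Position = 0;
                return stream;
            }
        }

        public class AzureProvider
        {
            public async Task UploadAsync(MemoryStream stream, string containerName, string blobName)
            {
                var account = CloudStorageAccount.Parse("<storage-connection-string>");
                var container = account.CreateCloudBlobClient().GetContainerReference(containerName);
                var blob = container.GetBlockBlobReference(blobName);

                blob.Properties.ContentType = "application/pdf";
                await blob.UploadFromStreamAsync(stream);
            }
        }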

    Somewhat related – in the next post I’ll explain how to stream a file from Azure Storage private container through .net WebAPI to an AngularJS app.
