Damus
Frederik Handberg
@frederikhandberg

23 🇩🇰 Studying for a degree in Software Engineering while building fun projects and working freelance as a News Photographer 📷

I share my software projects, photos and videos from my work as a news photographer, and progress updates as I learn to sew garments.

Basically, I just write about my hobbies.

frederikhandberg.com

Relays (4)
  • wss://relay.damus.io – read & write
  • wss://relay.nostr.band – read & write
  • wss://relay.snort.social – read & write
  • wss://nostr.oxtr.dev – read & write

Recent Notes

Frederik Handberg · 2d
Been working on implementing undo/redo functionality in my notes app. The `UndoManager` API makes this easy by simply calling `registerUndo` and then you can use `setActionName` to give it a name tha...
Frederik Handberg
This functionality is now supported by most block types, but I still need to integrate the `UndoManager` in a few more blocks like headings and videos.

Afterwards, I should make undo/redo work in the visual canvas.

I also need to test thoroughly to make sure it only performs the undo action for the open tab, and not in other tabs and/or windows as well.

The `UndoManager` is injected in the ViewModels and each open file (note/canvas) has its own ViewModel, but the shortcut `Cmd+Z` might work globally which would be a problem.

Too tired to test it, but should try it tomorrow…
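For context, the registration roughly looks like this (a simplified sketch; `BlockViewModel` and the action name are just illustrative, not my actual code):

```swift
import Foundation

// Simplified sketch: a per-file ViewModel that registers an inverse
// action with the injected UndoManager. Re-registering inside the
// handler is what makes redo work automatically.
final class BlockViewModel {
    var text: String = ""
    weak var undoManager: UndoManager?

    func setText(_ newText: String) {
        let oldText = text
        undoManager?.registerUndo(withTarget: self) { target in
            target.setText(oldText)
        }
        // Names the menu item, e.g. "Undo Edit Text"
        undoManager?.setActionName("Edit Text")
        text = newText
    }
}
```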
Frederik Handberg · 2d
Btw, make sure to use `@Environment(\.undoManager)` in the SwiftUI view. This automatically makes the menu options and shortcuts (`Cmd+Z`) work. https://www.hackingwithswift.com/forums/macos/swiftui-app-life-cycle-undo-redo-and-menu-bar-items/7771
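Something like this (a simplified sketch; the types are illustrative):

```swift
import SwiftUI

// Simplified sketch: each open file has its own ViewModel.
final class NoteViewModel: ObservableObject {
    @Published var text = ""
    weak var undoManager: UndoManager?
}

struct NoteEditorView: View {
    // Reads the window's UndoManager from the environment, so the
    // Edit menu and Cmd+Z are wired up per window automatically.
    @Environment(\.undoManager) private var undoManager
    @ObservedObject var viewModel: NoteViewModel

    var body: some View {
        TextEditor(text: $viewModel.text)
            .onAppear { viewModel.undoManager = undoManager }
    }
}
```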
Frederik Handberg
I will begin #sewing the next t-shirt.

The latest one I made was almost perfect. The shoulder seams should’ve been a bit longer, so I increased that by 2 cm on both sides.

Once I have made the perfect t-shirt that fits me, I can move on to the next project, which will most likely be a sweatshirt. I’m already working on the pattern in #CLO3D.
Frederik Handberg
Been working on copy-paste functionality for my notes app. It may sound a bit strange, as you might expect this to just work by default, but because of the way I handle text editing in the app, it doesn’t work out of the box.

My notes app is block-based, and because of this, each text block is its own text view. This means a note document contains multiple `NSTextView`s (one per text block). That makes copy-pasting across multiple blocks quite complicated, as these text blocks are completely separate: they have no connection to each other.

This is where my ‘sequential text view’ hack comes into play. I figured out a way to connect separate `NSTextView`s so that a text selection can span multiple text views, and then I override the default `copy()` function with custom logic that stitches the text blocks together while keeping the text styling/formatting.

So far, copy-pasting works for text, heading, and list blocks. Tomorrow, I will make it work for table, quote, and code blocks.
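The `copy()` override looks roughly like this (heavily simplified; `linkedSelections` is a stand-in for however the cross-view selection is actually tracked):

```swift
import AppKit

// Simplified sketch of the override: when a selection spans multiple
// linked text views, stitch their attributed strings together before
// writing to the pasteboard, so per-block formatting is preserved.
final class SequentialTextView: NSTextView {
    // Attributed selections collected from the linked text views.
    var linkedSelections: [NSAttributedString] = []

    override func copy(_ sender: Any?) {
        guard !linkedSelections.isEmpty else {
            super.copy(sender)  // single-block selection: default behavior
            return
        }
        let combined = NSMutableAttributedString()
        for (index, part) in linkedSelections.enumerated() {
            if index > 0 {
                combined.append(NSAttributedString(string: "\n"))
            }
            combined.append(part)  // keeps each block's styling
        }
        let pasteboard = NSPasteboard.general
        pasteboard.clearContents()
        pasteboard.writeObjects([combined])
    }
}
```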
Frederik Handberg · 4d
This is the text view hack I figured out: https://github.com/frederikhandberg0709/Sequential-Text-Interaction
Frederik Handberg
Using `Text()` from #SwiftUI has turned out to be a bit of a problem - at least in the visual canvas.

In SwiftUI, the easiest way to zoom in on a view is `.scaleEffect()`. This is exactly what I did in the canvas: when you zoomed in on a text object, the text was enlarged with a `.scaleEffect()` modifier. However, this caused an annoying problem where the text would become pixelated, so this approach was just not viable. I had to find a different way...

I tried another approach where I dynamically scale the font size (multiplying the base font size by the zoom scale). This did indeed solve the pixelation for normal text objects. But this approach introduced its own problem for note objects, which also use `Text()` from SwiftUI: it caused layout jumping.
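To illustrate the difference between the two zoom approaches (a simplified sketch, not my actual canvas code):

```swift
import SwiftUI

// Simplified sketch comparing the two zoom approaches.
struct CanvasText: View {
    let text: String
    let zoom: CGFloat  // canvas zoom scale, e.g. 0.5...4.0

    var body: some View {
        VStack {
            // Approach 1: lays out at the base size, then the rendered
            // output is stretched, so glyphs look pixelated when zoomed in.
            Text(text)
                .font(.system(size: 14))
                .scaleEffect(zoom)

            // Approach 2: re-lays-out at the scaled size, so glyphs stay
            // sharp, but fractional sizes can shift wrap points
            // (the layout jumping described above).
            Text(text)
                .font(.system(size: 14 * zoom))
        }
    }
}
```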

If I understand it correctly, Core Text (the engine that renders the text) does not scale fonts linearly. For example, a 14pt font doesn't always take up exactly twice the space of a 7pt font. I noticed small adjustments to letter-spacing happening at every fractional font size.

This meant that as the user zoomed, a word that barely fit on line one would suddenly become 0.1 pixels too wide and wrap to line two. That caused the entire text block to "jump" vertically as lines snapped back and forth between different wrap points. It felt broken and looked bad.

The solution ended up being to use AppKit. Unlike SwiftUI’s `Text()`, the `NSTextView` from #AppKit allows us to manipulate the text to a much greater degree.

So this did indeed fix the problem for note objects. BUT… it also introduced a new problem, since I use `ImageRenderer` to take a snapshot of the canvas, which is then used to display a preview of the canvas in search results.

The thing is that `ImageRenderer` **ONLY** works with SwiftUI. It is simply incapable of capturing AppKit views hosted via `NSViewRepresentable`. It seems like my only choice is to render an `NSTextView` in the canvas, and then render a SwiftUI `Text()` when the app needs to take a snapshot with the `ImageRenderer` API. #dev
Frederik Handberg · 1w
After spending some time testing this new solution, it seems to work great. I am still missing the updated scaling code in a few block types like tables and list items.
Juraj · 1w
https://github.com/jooray/nalgorithm
Frederik Handberg
I hear a lot of developers complain about AI having sucked all the fun out of programming.

I can understand that if what they enjoy is writing the code.
But personally, I feel the complete opposite. I have never enjoyed programming more than now. I’m so happy that I no longer need to spend a ton of time writing the code. Instead, I can use that time to figure out the best architecture for the system I’m building.

When I work with an LLM, I write Markdown documents that I use to prompt it with detailed descriptions of the features I want it to implement. This could include which APIs I expect it to use. If it’s a rather obscure API, I copy-paste its definition so the LLM knows exactly which parameters it takes and which methods can be called.

I find that if I don’t give the LLM clear directions, it will start using deprecated methods and writing new helper methods even though it could just use existing ones.
In general, I think LLMs can be a bit lazy at times… But they can do excellent work when given a good prompt. #AI #LLM #coding
Frederik Handberg · 1w
Here's a look at V1 of the grid view mode in fullscreen search: https://blossom.primal.net/dd1df49bacfabb7899b2a0cdc94e27284daffa340a956f4c9e7426e18dab0906.mp4 Most of my time has been working on a ...
Frederik Handberg
I use the `ImageRenderer` to take snapshots of the canvas and it works well. But I do find it super annoying that it only works with SwiftUI. This is a problem as I use `NSViewRepresentable` from AppKit in many parts of the canvas. Especially for text-based blocks in note document objects, such as headings, lists, code blocks, and paragraphs.

So I had to implement a separate case where if the block is rendered because a snapshot is about to be taken, then don’t bother with `NSTextView` and instead just render a `Text` view from SwiftUI.
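Roughly like this (a simplified sketch; `BlockTextView` is a minimal stand-in for my actual NSTextView wrapper):

```swift
import SwiftUI
import AppKit

// Minimal stand-in for my NSTextView wrapper (the real one does much more).
struct BlockTextView: NSViewRepresentable {
    let text: AttributedString

    func makeNSView(context: Context) -> NSTextView {
        let view = NSTextView()
        view.textStorage?.setAttributedString(NSAttributedString(text))
        return view
    }

    func updateNSView(_ nsView: NSTextView, context: Context) {}
}

// Snapshot fallback: ImageRenderer can't capture NSViewRepresentable
// content, so while a snapshot is being taken, render plain SwiftUI
// Text instead of the AppKit-backed block.
struct ParagraphBlock: View {
    let text: AttributedString
    let isSnapshot: Bool  // true while ImageRenderer is capturing

    var body: some View {
        if isSnapshot {
            Text(text)                 // SwiftUI-only path for ImageRenderer
        } else {
            BlockTextView(text: text)  // AppKit path for normal editing
        }
    }
}
```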

https://developer.apple.com/documentation/swiftui/imagerenderer