TL;DR: Does the Arc A310 have any important advantage over recent Intel low-power CPUs with integrated graphics (e.g. N100/N150/N350/N355) specifically for use with Jellyfin, in terms of the number of streams it can transcode simultaneously or something like that?
Even if they do differ, is it something I would notice in a household context (e.g. with probably never more than 4 users at a time), or would the discrete GPU just be overkill?
Context, if you need it:
My Jellyfin currently runs in a VM on a Proxmox server with a Ryzen 5 3600 CPU and a Vega 56 discrete GPU, which draws a lot of power unnecessarily and apparently isn’t recommended for Jellyfin transcoding because of poor encoder quality. I’m thinking about either replacing the GPU with an Arc A310 for ~$100, or replacing the whole CPU/mobo/GPU with some kind of low-power Intel ITX board (the kind designed for routers or NASes, with a soldered-on N100 or similar) for ~$200. I’m leaning towards the latter because it would use less power, be simpler to set up (since, as I understand it, an integrated GPU is always available to the host instead of needing to be passed through and dedicated to a single VM/container), be more versatile in the future (e.g. as a NAS or router), and be a whole additional system, freeing up the AMD hardware for some other use.
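For what it's worth, here's the back-of-envelope math behind the power argument. All wattages and the electricity rate are assumptions for illustration, not measurements of my actual hardware:

```python
# Rough annual-cost comparison between keeping the Ryzen/Vega box
# and switching to a low-power N100 board. Every number below is an
# assumption, not a measured value.

IDLE_W_RYZEN_VEGA = 70   # assumed: Ryzen 5 3600 + Vega 56 server at idle
IDLE_W_N100_BOARD = 15   # assumed: N100 ITX board at idle
RATE_PER_KWH = 0.15      # assumed electricity price in $/kWh

HOURS_PER_YEAR = 24 * 365

def annual_cost(watts: float) -> float:
    """Cost of running a constant load for one year, in dollars."""
    return watts / 1000 * HOURS_PER_YEAR * RATE_PER_KWH

saving = annual_cost(IDLE_W_RYZEN_VEGA) - annual_cost(IDLE_W_N100_BOARD)
print(f"Estimated annual saving: ${saving:.2f}")  # ~ $72/year under these assumptions
```

Under these (made-up but plausible) numbers, the ~$200 board would pay for itself in roughly three years on idle power alone.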
But is the N100 option just strictly equal or better for Jellyfin, or is there some other performance trade-off?
(BTW, I know the Arc uses Intel Quick Sync Video version 9 while the N100 uses version 8, with the main difference being that version 9 adds AV1 hardware encoding (up to 8K 10-bit), whereas version 8 only supports AV1 decoding. I’m not going to be encoding AV1 any time in the foreseeable future, so I don’t care about that.)
