Hello
I initially opened this problem in the Juniper communities:
- https://community.juniper.net/discussio ... t-in-eveng
during my attempts to boot the latest vJunosEvolved image, 24.2R1-S2.4, on my bare-metal EVE-NG Community Edition (6.2-04) running on an AMD 3900XT. As I wrote there, I found out that this release uses EFI boot instead of the MBR boot that previous releases use. After trying several things, I managed to boot it from the CLI using these qemu-system-x86_64 options:
===============================
qemu-system-x86_64 -enable-kvm -m 8G -cpu host \
  -drive file=virtioa.qcow2,if=virtio,format=qcow2 \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -nographic -serial mon:stdio -boot menu=on \
  -smbios type=0,vendor=Bochs,version=Bochs \
  -smbios type=3,manufacturer=Bochs \
  -smbios type=1,manufacturer=Bochs,product=Bochs,serial=chassis_no=0:slot=0:type=1:assembly_id=0x0D20:platform=251:master=0:channelized=no \
  -machine pc-q35-5.2,accel=kvm \
  -smp 4
===============================
and then, with manual intervention when it gets stuck (FS0:, then \EFI\BOOT\BOOTX64.EFI).
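For reference, the manual intervention at the UEFI shell looks roughly like this (prompt text is approximate, not a verbatim capture):
===============================
Shell> FS0:
FS0:\> \EFI\BOOT\BOOTX64.EFI
===============================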
I have tried to pass these options to a new instance (at the instance level) and also by creating a new .yml file (or modifying existing ones), without any success though.
I have also tried both the qemu-system-x86_64 CLI and the .yml files on an EVE-NG Community 5.x VM running on an Intel CPU, again without success (even though the options from Juniper's vJunosEvolved libvirt XML file, https://cdn.juniper.net/software/vJunos ... 1-S2.4.xml, were used in the .yml or at the instance level).
Will there be support for it soon?
vJunosEvolved 24.2R1-S2.4 support in EVENG community
Re: vJunosEvolved 24.2R1-S2.4 support in EVENG community
Change the qemu options line in the vjunosevo23.yml template to:
-machine type=pc,accel=kvm -serial mon:stdio -nographic -smbios type=0,vendor=Bochs,version=Bochs -smbios type=3,manufacturer=Bochs -smbios type=1,manufacturer=Bochs,product=Bochs,serial=chassis_no=0:slot=0:type=1:assembly_id=0x0D20:platform=251:master=0:channelized=no -cpu qemu64 -bios /opt/qemu/share/qemu/OVMF-sata.fd
As you can see, UEFI boot is added.
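For anyone who wants to test the same options standalone before editing the template, a roughly equivalent qemu-system-x86_64 invocation would be something like the following (the memory size and the virtioa.qcow2 path are illustrative assumptions; the rest mirrors the template line above):
===============================
qemu-system-x86_64 -machine type=pc,accel=kvm -m 8192 -cpu qemu64 \
  -serial mon:stdio -nographic \
  -smbios type=0,vendor=Bochs,version=Bochs \
  -smbios type=3,manufacturer=Bochs \
  -smbios type=1,manufacturer=Bochs,product=Bochs,serial=chassis_no=0:slot=0:type=1:assembly_id=0x0D20:platform=251:master=0:channelized=no \
  -bios /opt/qemu/share/qemu/OVMF-sata.fd \
  -drive file=virtioa.qcow2,if=virtio,format=qcow2
===============================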
-
- Posts: 4
- Joined: Mon Feb 10, 2025 2:10 pm
Re: vJunosEvolved 24.2R1-S2.4 support in EVENG community
Thank you Uldis
I have done it and it worked this time. I am wondering, though, because based on Juniper's XML file something different was specified:
==================
<ns0:commandline>
<ns0:arg value="-smbios"/>
<ns0:arg value="type=0,vendor=Bochs,version=Bochs"/>
<ns0:arg value="-smbios"/>
<ns0:arg value="type=3,manufacturer=Bochs"/>
<ns0:arg value="-bios"/>
<ns0:arg value="/usr/share/qemu/OVMF.fd"/>
<ns0:arg value="-smbios"/>
<ns0:arg value="type=1,manufacturer=Bochs,product=Bochs,serial=chassis_no=0:slot=0:type=1:assembly_id=0x0D20:platform=251:master=0:channelized=no"/>
</ns0:commandline>
</domain>
==================
In that XML the -bios option is placed after the -smbios type=3,manufacturer=Bochs options, and the value of the -bios option is different too... in any case, it never worked for me.
I was also looking at the xrv8102.yml template, which also makes use of EFI: it includes a different -machine type option (supposedly the one to use for EFI) and the following EFI option: -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd. That approach worked from the qemu-system-x86_64 CLI but never via a template or instance settings.
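For reference, the pflash-style UEFI boot from the CLI was along these lines (a sketch only; paths, machine type and memory size are illustrative, and the second, writable OVMF_VARS.fd pflash drive is my assumption for persisting EFI variables):
===============================
qemu-system-x86_64 -machine q35,accel=kvm -m 8G -cpu host \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=OVMF_VARS.fd \
  -drive file=virtioa.qcow2,if=virtio,format=qcow2 \
  -nographic -serial mon:stdio
===============================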
Since the other vEVO images still use MBR boot, I think a new .yml file should be used in order to avoid conflicts, right?
Re: vJunosEvolved 24.2R1-S2.4 support in EVENG community
I think in a further release I will add a new EVE template with UEFI boot.
Until now we have had two templates:
1. vjunosevo.yml, used when vEVO required two images (the first Juniper vEVO, before the single-image release)
2. vjunosevo23.yml, the single-image template; this appeared when version 23 came out as a single image
3. Now there is new stuff added with UEFI, which means a new template: vjunosevoefi.yml
I created this template for EVE:
https://gitlab.com/eve-ng-dev/templates ... evoefi.yml
Load this template into your EVE, create an image folder named vjunosevoefi-xxxx, and load the image.
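A minimal sketch of those loading steps on the EVE-NG CLI, assuming the folder prefix in the new template is vjunosevoefi and the disk is named virtioa.qcow2 as in the other vEVO templates (the release string and source file name are illustrative):
===============================
mkdir /opt/unetlab/addons/qemu/vjunosevoefi-24.2R1-S2.4
cp vJunosEvolved-24.2R1-S2.4.qcow2 /opt/unetlab/addons/qemu/vjunosevoefi-24.2R1-S2.4/virtioa.qcow2
# fix ownership/permissions so EVE-NG can use the image
/opt/unetlab/wrappers/unl_wrapper -a fixpermissions
===============================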
Regarding xrv8102.yml: the OVMF files are obtained from Cisco together with the image and are loaded alongside the image in its folder:
root@eve-master:/opt/unetlab/addons/qemu/xrv8102-XR-24.1.1# ls
OVMF_CODE.fd  OVMF.fd  OVMF_VARS.fd  virtioa.qcow2
Booting requires the OVMF_CODE.fd file, and it is a very specific one from Cisco.
Re: vJunosEvolved 24.2R1-S2.4 support in EVENG community
Hi Uldis
I believed that the main difference between vjunosevo.yml and vjunosevo23.yml was that since JunosEVO 23.2 (I don't remember the exact release) there is no longer any need to use bridges (rpio and pfe) to bring up and have operational interfaces on an EVO image (I will try to find the Juniper URL stating that, but it is obvious in the .yml files, and in fact I have been labbing with EVO images that way for some time already).
In fact, the EVE-NG guidelines for using vJunosEVO:
-- https://www.eve-ng.net/index.php/docume ... vo-router/
point to the usage of vjunosevo.yml. I haven't been able to match those instructions to vjunosevo23.yml (beyond the difference I described earlier, to the best of my understanding). So basically I used to modify vjunosevo.yml (e.g. by commenting out the rpio/pfe parts).
By the way, since I am mainly using AMD for my EVE-NG labs, I think there is no impact from the qemu options used (nothing Intel-specific in the -cpu options), so it should do the job (with the correct OVMF file you mentioned instead of the value in Juniper's XML).
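Just as a sanity check on the AMD host, the usual quick verification that KVM acceleration is available (nothing vendor-specific is needed beyond this; commands shown as a sketch):
===============================
# non-zero output means hardware virtualization (svm on AMD, vmx on Intel) is exposed
egrep -c '(vmx|svm)' /proc/cpuinfo
# /dev/kvm must exist for accel=kvm / -enable-kvm to work
ls -l /dev/kvm
===============================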
Thank you for your time and help.