
The Samsung Developers team works with many companies in the mobile and gaming ecosystems. We're excited to support our partner, Arm, as they bring timely and relevant content to developers looking to build games and high-performance experiences. This Vulkan Extensions series will help developers get the most out of the new and game-changing Vulkan extensions on Samsung mobile devices.

Android is enabling a host of useful new Vulkan extensions for mobile. These new extensions are set to improve the state of graphics APIs for modern applications, enabling new use cases and changing how developers can design graphics renderers going forward. In particular, Android R adds a whole set of new Vulkan extensions. These extensions will be available across various Android smartphones, including the Samsung Galaxy S21, which launched on 14 January. Existing Samsung Galaxy S models, such as the Samsung Galaxy S20, can also upgrade to Android R.

One category of these new Vulkan extensions for mobile is the ‘maintenance extensions’. These plug various holes in the Vulkan specification. Most of them can be worked around when absent, but doing so is annoying for application developers. Having these extensions means less friction overall, which is a very good thing.

VK_KHR_uniform_buffer_standard_layout

This extension is a quiet one, but I still feel it has a lot of impact since it removes a fundamental restriction for applications. Getting to data efficiently is the lifeblood of GPU programming.

One thing I have seen trip up developers again and again is the set of antiquated rules for how uniform buffers (UBOs) are laid out in memory. For whatever reason, UBOs have been stuck with annoying alignment rules that go back to ancient times, yet SSBOs have nice alignment rules. Why?

As an example, let us assume we want to send an array of floats to a shader:

#version 450

layout(set = 0, binding = 0, std140) uniform UBO
{
    float values[1024];
};

layout(location = 0) out vec4 FragColor;
layout(location = 0) flat in int vIndex;

void main()
{
    FragColor = vec4(values[vIndex]);
}

If you are not used to graphics API idiosyncrasies, this looks fine, but danger lurks around the corner. Any array in a UBO will be padded out to 16-byte elements, meaning the only way to get a tightly packed UBO is to use vec4 arrays. Somehow, legacy hardware was hardwired for this assumption. SSBOs never had this problem.

std140 vs std430

You might have run into these weird layout qualifiers in GLSL. They reference some rather old GLSL versions. std140 refers to GLSL 1.40, which was introduced in OpenGL 3.1, the version in which uniform buffers came to OpenGL.

The std140 packing rules define how variables are packed into buffers. The main quirks of std140 are:

  • Vectors are aligned to their size. Notoriously, a vec3 is aligned to 16 bytes, which has tripped up countless programmers over the years, but this is just the nature of vectors in general. Hardware tends to like aligned access to vectors.
  • Array element strides are aligned to 16 bytes. This makes it very wasteful to use arrays of float and vec2.

The array quirk mirrors HLSL’s cbuffer. After all, both OpenGL and D3D mapped to the same hardware. Essentially, my assumption here is that hardware was only able to load 16 bytes at a time with 16-byte alignment. Extracting scalars could always be done after the load.

std430 was introduced in GLSL 4.30 in OpenGL 4.3 and was designed to be used with SSBOs. std430 removed the array element alignment rule, which means that with std430, we can express this efficiently:

#version 450

layout(set = 0, binding = 0, std430) readonly buffer SSBO
{
    float values[1024];
};

layout(location = 0) out vec4 FragColor;
layout(location = 0) flat in int vIndex;

void main()
{
    FragColor = vec4(values[vIndex]);
}

Basically, the new extension enables std430 layout for use with UBOs as well.

#version 450
#extension GL_EXT_scalar_block_layout : require

layout(set = 0, binding = 0, std430) uniform UBO
{
    float values[1024];
};

layout(location = 0) out vec4 FragColor;
layout(location = 0) flat in int vIndex;

void main()
{
    FragColor = vec4(values[vIndex]);
}

Why not just use SSBOs then?

On some architectures, yes, that is a valid workaround. However, some architectures also have special caches designed specifically for UBOs. Improving the memory layout of UBOs is still valuable.

GL_EXT_scalar_block_layout?

The Vulkan GLSL extension which supports std430 UBOs goes a little further and supports the scalar layout as well. This is a completely relaxed layout scheme where alignment requirements are essentially gone; however, it requires a different Vulkan extension (VK_EXT_scalar_block_layout) to work.

VK_KHR_separate_depth_stencil_layouts

Depth-stencil images are weird in general. It is natural to think of these two aspects as separate images. However, the reality is that some GPU architectures like to pack depth and stencil together into one image, especially with D24S8 formats.

Expressing image layouts with depth and stencil formats has therefore been somewhat awkward in Vulkan, especially if you want to make one aspect read-only while keeping the other read/write.

In Vulkan 1.0, both depth and stencil had to be in the same image layout: you are either doing read-only depth-stencil or read/write depth-stencil. This was quickly identified as not good enough for certain use cases. There are valid use cases where depth is read-only while stencil is read/write, in deferred rendering for example.

Eventually, VK_KHR_maintenance2 added support for some mixed image layouts which lets us express read-only depth, read/write stencil, and vice versa:

VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_STENCIL_READ_ONLY_OPTIMAL_KHR

VK_IMAGE_LAYOUT_DEPTH_READ_ONLY_STENCIL_ATTACHMENT_OPTIMAL_KHR

Usually, this is good enough, but there is a significant caveat to this approach: depth and stencil layouts must be specified and transitioned together. This means it is not possible to render to the depth aspect while concurrently transitioning the stencil aspect, since changing an image layout is a write operation. If the engine is not designed to couple depth and stencil together, this causes a lot of friction in the implementation.

What this extension does is completely decouple image layouts for depth and stencil aspects and makes it possible to modify the depth or stencil image layouts in complete isolation. For example:

    VkImageMemoryBarrier barrier = {…};

Normally, we would have to specify both the DEPTH and STENCIL aspects for depth-stencil images. Now, we can completely ignore what stencil is doing and only modify the depth image layout.

    barrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_DEPTH_BIT;
    barrier.oldLayout = VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_OPTIMAL_KHR;
    barrier.newLayout = VK_IMAGE_LAYOUT_DEPTH_READ_ONLY_OPTIMAL_KHR;

Similarly, in VK_KHR_create_renderpass2, there are extension structures where you can specify stencil layouts separately from the depth layout if you wish.

typedef struct VkAttachmentDescriptionStencilLayout {
    VkStructureType    sType;
    void*              pNext;
    VkImageLayout      stencilInitialLayout;
    VkImageLayout      stencilFinalLayout;
} VkAttachmentDescriptionStencilLayout;

typedef struct VkAttachmentReferenceStencilLayout {
    VkStructureType    sType;
    void*              pNext;
    VkImageLayout      stencilLayout;
} VkAttachmentReferenceStencilLayout;

As with image memory barriers, it is possible to express layout transitions that touch only the depth or only the stencil attachment.

VK_KHR_spirv_1_4

Each core Vulkan version has targeted a specific SPIR-V version. For Vulkan 1.0, we have SPIR-V 1.0. For Vulkan 1.1, we have SPIR-V 1.3, and for Vulkan 1.2 we have SPIR-V 1.5.

SPIR-V 1.4 was an interim version between Vulkan 1.1 and 1.2 which added some nice features, but the usefulness of this extension is largely limited to developers who target SPIR-V themselves. Developers using GLSL or HLSL might not find much use for it. Some highlights of SPIR-V 1.4 that I think are worth mentioning are listed here.

OpSelect between composite objects

Before SPIR-V 1.4, OpSelect only supported selecting between scalars and vectors. SPIR-V 1.4 allows OpSelect to work on composite objects as well, so this kind of code can be expressed with a simple OpSelect:

    MyStruct s = cond ? MyStruct(1, 2, 3) : MyStruct(4, 5, 6);

OpCopyLogical

There are scenarios in high-level languages where you load a struct from a buffer and then place it in a function variable. If you have ever looked at the SPIR-V for this kind of scenario, glslang copies each element of the struct one by one, which generates bloated SPIR-V code. This is because the struct type that lives in a buffer and the struct type of a function variable are not necessarily the same; offset decorations are the major culprit. Copying objects in SPIR-V only works when the types are exactly the same, not “almost the same”. OpCopyLogical fixes this problem by letting you copy objects whose types are identical except for decorations.

Advanced loop control hints

SPIR-V 1.4 adds ways to express partial unrolling, how many iterations are expected, and other advanced hints which can help a driver optimize better using knowledge it otherwise would not have. There is no way to express these in normal shading languages yet, but it does not seem difficult to add support for them.

Explicit look-up tables

Describing look-up tables was a bit awkward in SPIR-V. The natural way to do this in SPIR-V 1.3 is to declare an array with Private storage scope and an initializer, access chain into it, and load from it. However, there was never a way to express that a global variable is const, so this pattern relies on compilers being a little smart. As a case study, let us see what glslang emits when targeting the Vulkan 1.1 environment:

#version 450

layout(location = 0) out float FragColor;
layout(location = 0) flat in int vIndex;

const float LUT[4] = float[](1.0, 2.0, 3.0, 4.0);

void main()
{
    FragColor = LUT[vIndex];
}

%float_1 = OpConstant %float 1
%float_2 = OpConstant %float 2
%float_3 = OpConstant %float 3
%float_4 = OpConstant %float 4
%16 = OpConstantComposite %_arr_float_uint_4 %float_1 %float_2 %float_3 %float_4

%indexable = OpVariable %_ptr_Function__arr_float_uint_4 Function
OpStore %indexable %16
%24 = OpAccessChain %_ptr_Function_float %indexable %index
%25 = OpLoad %float %24

This is super weird code, but it is easy for compilers to promote to a LUT. If the compiler can prove there are no readers before the OpStore, and that only one OpStore can statically happen, it can promote the variable to a constant LUT.

In SPIR-V 1.4, the NonWritable decoration can also be used with Private and Function storage variables. Add an initializer, and we get something that looks far more reasonable and obvious:

OpDecorate %indexable NonWritable
%16 = OpConstantComposite %_arr_float_uint_4 %float_1 %float_2 %float_3 %float_4

// Initialize an array with a constant expression and mark it as NonWritable.
// This is trivially a LUT.
%indexable = OpVariable %_ptr_Function__arr_float_uint_4 Function %16
%24 = OpAccessChain %_ptr_Function_float %indexable %index
%25 = OpLoad %float %24

VK_KHR_shader_subgroup_extended_types

This extension fixes a hole in Vulkan subgroup support. When subgroups were introduced, it was only possible to use subgroup operations on 32-bit values. However, with 16-bit arithmetic getting more popular, especially float16, there are use cases where you want to use subgroup operations on smaller arithmetic types, making this kind of shader possible:

#version 450

// subgroupAdd
#extension GL_KHR_shader_subgroup_arithmetic : require
// For FP16 arithmetic
#extension GL_EXT_shader_explicit_arithmetic_types_float16 : require
// For subgroup operations on FP16
#extension GL_EXT_shader_subgroup_extended_types_float16 : require

layout(location = 0) out f16vec4 FragColor;
layout(location = 0) in f16vec4 vColor;

void main()
{
    FragColor = subgroupAdd(vColor);
}

VK_KHR_imageless_framebuffer

In most engines, using VkFramebuffer objects can feel a bit awkward, since most engine abstractions are based around some idea of:

MyRenderAPI::BindRenderTargets(colorAttachments, depthStencilAttachment)

In this model, VkFramebuffer objects introduce a lot of friction, since engines almost certainly end up with one of two strategies:

  • Create a VkFramebuffer for every render pass, and free it later.
  • Maintain a hashmap of all observed attachment and render-pass combinations.

Unfortunately, there are some … reasons why VkFramebuffer exists in the first place, but VK_KHR_imageless_framebuffer at least removes the largest pain point: needing to know the exact VkImageViews we are going to use before we actually start rendering.

With imageless framebuffers, we can defer choosing the exact VkImageViews we are going to render into until vkCmdBeginRenderPass. However, the framebuffer itself still needs certain metadata about the attachments ahead of time, because some drivers unfortunately need this information.

First, we set the VK_FRAMEBUFFER_CREATE_IMAGELESS_BIT flag in vkCreateFramebuffer. This removes the need to set pAttachments. Instead, we specify some parameters for each attachment. We pass down this structure as a pNext:

typedef struct VkFramebufferAttachmentsCreateInfo {
    VkStructureType                            sType;
    const void*                                pNext;
    uint32_t                                   attachmentImageInfoCount;
    const VkFramebufferAttachmentImageInfo*    pAttachmentImageInfos;
} VkFramebufferAttachmentsCreateInfo;

typedef struct VkFramebufferAttachmentImageInfo {
    VkStructureType       sType;
    const void*           pNext;
    VkImageCreateFlags    flags;
    VkImageUsageFlags     usage;
    uint32_t              width;
    uint32_t              height;
    uint32_t              layerCount;
    uint32_t              viewFormatCount;
    const VkFormat*       pViewFormats;
} VkFramebufferAttachmentImageInfo;

Essentially, we need to specify almost everything that vkCreateImage would specify. The only thing we avoid is having to know the exact image views we need to use.

To begin a render pass which uses an imageless framebuffer, we pass down this struct in vkCmdBeginRenderPass instead:

typedef struct VkRenderPassAttachmentBeginInfo {
    VkStructureType       sType;
    const void*           pNext;
    uint32_t              attachmentCount;
    const VkImageView*    pAttachments;
} VkRenderPassAttachmentBeginInfo;

Conclusions

Overall, I feel this extension does not really solve the problem of having to know images up front. Knowing the resolution and usage flags of every attachment up front is almost as restrictive as knowing the exact image views. If your engine knows all this information up front, just not the exact image views, then this extension can be useful. The number of unique VkFramebuffer objects will likely go down as well, but otherwise, in my personal view, there is room to greatly improve things.

In the next blog on the new Vulkan extensions, I explore 'legacy support extensions.'

Follow Up

Thanks to Hans-Kristian Arntzen and the team at Arm for bringing this great content to the Samsung Developers community. We hope you find this information about Vulkan extensions useful for developing your upcoming mobile games.

The Samsung Developers site has many resources for developers looking to build for and integrate with Samsung devices and services. Stay in touch with the latest news by creating a free account or by subscribing to our monthly newsletter. Visit the Marketing Resources page for information on promoting and distributing your apps and games. Finally, our developer forum is an excellent way to stay up-to-date on all things related to the Galaxy ecosystem.
