LightBlog

Monday, December 14, 2020

Google’s new Chrome Labs lets users test experimental browser features

Google constantly adds new features to Chrome with each new update. But before these features are released to the public, users can test them out by enabling the respective experimental flags in the chrome://flags page. However, keeping track of all the new flags can prove to be a bit difficult, as the chrome://flags page lists a whole bunch of them, and it provides no information about recent additions. To address this issue, Google is now testing a new feature called Chrome Labs, which will promote some of the new flags directly to users.

According to a recent report from gHacks, the new Chrome Labs feature is currently available in the latest Chrome Canary builds, but you have to enable it before it shows up at all. To do so, first make sure you have the latest version of Chrome Canary (v89.0.4353.0) installed on your system. Once you have installed the latest update, head over to the chrome://flags page and search for ‘Chrome Labs.’

[Image: The Chrome Labs feature in Chrome Canary]

You can then enable the ‘Chrome Labs’ flag and restart the browser for the changes to take effect. You should then see a new icon that looks like an Erlenmeyer flask next to the address bar. Clicking on the icon will produce a drop-down menu that lists all the experimental flags that are available via Chrome Labs.
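
If you prefer the command line, Chromium also accepts an --enable-features switch at launch. Here’s a rough Python sketch of that approach; note that the Canary path below is just a placeholder for your own install location, and the internal feature name “ChromeLabs” is our assumption based on the flag’s name, so it may differ.

```python
# Minimal sketch: launch Chrome Canary with the Chrome Labs feature force-enabled
# instead of flipping the flag in chrome://flags.
# Assumptions: the path below is a placeholder (adjust for your OS), and the
# feature name "ChromeLabs" is inferred from the flag's name and may differ.
import subprocess

CANARY_PATH = r"C:\Users\you\AppData\Local\Google\Chrome SxS\Application\chrome.exe"  # placeholder path

subprocess.Popen([
    CANARY_PATH,
    "--enable-features=ChromeLabs",  # --enable-features is a standard Chromium switch
])
```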

As of now, the Chrome Labs feature lists two experimental features:

  • Reading List: A new option that will let you add tabs to a reading list by right-clicking on the tab or clicking on the bookmark star and selecting “add to reading list.” You can find all the tabs added to your reading list in the bookmarks option.
  • Tab Search: This feature adds a new tab search icon to the Chrome tab bar that you can use to easily search for a particular tab from all the tabs opened in the browser.

Much like with Chrome flags, you’ll need to restart the browser after enabling the aforementioned features for the changes to take effect. As of now, we have no further information about the Chrome Labs feature or when it may land on the stable channel. We’ll update this post as soon as Google releases more details about the feature.

The post Google’s new Chrome Labs lets users test experimental browser features appeared first on xda-developers.




Nokia PureBook X14 laptop launched in India in association with Flipkart

Nokia has entered the Indian laptop market with its new PureBook X14. The notebook has been launched in partnership with Indian e-commerce player Flipkart and looks like a mid-range 14-inch machine aimed at mainstream users. Flipkart started teasing the notebook last week, and it is the first laptop to carry the Nokia name since 2009, when the company introduced the Nokia Booklet 3G netbook.

Nokia PureBook X14: Specifications

Dimensions & Weight
  • 320.2 x 214.5 x 16.8 mm
  • 1.1 kg
Display
  • 14-inch Full HD (1920×1080) IPS
  • Dolby Vision
  • 178-degree viewing angle
Processor
  • Intel Core i5-10210U (1.6GHz / 4.2GHz)
GPU
  • Intel UHD 620
RAM & Storage
  • 8GB DDR4
  • 512GB NVMe SSD
Battery & Charger
  • 46.7 WHr (8 hours claimed)
  • 65W charger
I/O
  • 2 x USB 3.1 Type-A
  • USB 2.0 Type-A
  • USB 3.1 Type-C
  • HDMI
  • Ethernet
  • Kensington lock slot
  • Power-in
  • 3.5mm headphone/microphone combo jack
Connectivity
  • Intel Wireless-AC 9462 dual-band Wi-Fi
  • Bluetooth 5.0
OS
  • Windows 10 Home Plus
Other Features
  • Windows Hello face recognition
  • Backlit keyboard
  • Dolby Atmos

The PureBook X14 comes with a 14-inch Full HD display with an 86% screen-to-body ratio. The LED-backlit IPS panel is claimed to offer 178-degree viewing angles, and there is also support for Dolby Vision for improved color reproduction. Judging by its dimensions, the notebook falls into the ultra-light category, weighing 1.1 kg with a thickness of 16.8 mm. It comes in a single matte black finish and overall doesn’t look all that shabby.

As for the internals, the PureBook X14 is powered by a 10th-gen Intel Core i5 processor. While the company hasn’t named the model outright, it is the Core i5-10210U, a quad-core chip with a base clock of 1.6GHz and a boost clock of 4.2GHz. It is paired with a 512GB NVMe SSD and 8GB of DDR4 memory, while graphics are handled by Intel’s UHD 620. There is also standard dual-band Wi-Fi and Bluetooth 5.1 on the notebook. For I/O, you get two USB 3.1 Type-A ports, a USB 2.0 Type-A port, a USB 3.1 Type-C port, Ethernet, HDMI, a Kensington lock slot, a headphone/mic combo jack, and a dedicated power pin. As for battery life, the notebook is claimed to offer 8 hours of use, and you get a 65W charger in the box.

“Launching the Nokia brand into this new product category is a testament to our successful collaboration with Flipkart. We are excited to offer consumers in India a Nokia branded laptop which brings innovation to address a gap in the market, as well as the style, performance, and reliability that the Nokia brand is known for,” said Vipul Mehrotra, Vice President – Nokia Brand Partnerships.

Other notable features of the notebook include Dolby Atmos audio support, face unlock with Windows Hello-certified HD IR webcam, a backlit keyboard with adjustable brightness and 1.4 mm key travel, and the notebook ships with Windows 10 Home Plus.

Pricing and Availability

The Nokia PureBook X14 will be available for pre-order on Flipkart starting December 18 at a launch price of ₹59,990 (~$814). At this price, Xiaomi offers the Mi NoteBook 14 Horizon Edition, which packs a more powerful Core i7 processor along with NVIDIA’s GeForce MX350 graphics. Of course, Nokia’s machine is a tad lighter and slimmer than Xiaomi’s offering.

The post Nokia PureBook X14 laptop launched in India in association with Flipkart appeared first on xda-developers.




Google details the technology behind Pixel’s Portrait Light feature

After several leaks and rumors, Google finally unveiled the Pixel 5 and Pixel 4a 5G earlier this year in September. As expected, the devices came with a host of new Google Camera features that set them apart from other Android phones on the market. These include Cinematic Pan for shake-free panning in videos, Locked and Active Stabilization modes, Night Sight support in Portrait Mode, and a Portrait Light feature that automatically adjusts the lighting of portrait shots. A few weeks after the launch, Google released most of these features for older Pixel devices via a Google Photos update. Now, the company has shared some details about the technology behind the Portrait Light feature.

According to a recent blog post from the company, the Portrait Light feature was inspired by the off-camera lights that portrait photographers use. It enhances portrait shots by modeling a repositionable light source that can be added to the scene. The artificial light source is added automatically, and machine learning adjusts its direction and intensity to complement the photo’s existing lighting.

As Google explains, the feature makes use of novel machine learning models that were trained using a diverse dataset of photographs captured in the Light Stage computational illumination system. These models enable two algorithmic capabilities:

  • Automatic directional light placement: Based on the machine learning algorithm, the feature automatically places an artificial light source that is consistent with how a professional photographer would have placed an off-camera light source in the real world.
  • Synthetic post-capture relighting: Based on the direction and intensity of the existing light in a portrait shot, the machine learning algorithm adds a synthetic light that looks realistic and natural.

For the automatic directional light placement, Google trained a machine learning model to estimate a high dynamic range, omnidirectional illumination profile for a scene based on an input portrait. This new lighting estimation model can find the direction, relative intensity, and color of all light sources in the scene coming from all directions, treating the face as a light probe. It also estimates the head pose of the subject using MediaPipe Face Mesh. Based on this data, the algorithm then determines the direction for the synthetic light.
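
Google hasn’t published the model itself, but to illustrate what an omnidirectional illumination estimate lets you compute, here’s a rough NumPy sketch that pulls a luminance-weighted dominant light direction out of an equirectangular HDR environment map. The map here is placeholder data, and this is only a toy stand-in for Google’s learned estimator.

```python
# Rough NumPy sketch (not Google's model): given an estimated HDR omnidirectional
# illumination profile stored as an equirectangular image, compute a
# luminance-weighted dominant light direction. The environment map is random
# placeholder data.
import numpy as np

def dominant_light_direction(env_map: np.ndarray) -> np.ndarray:
    """env_map: (H, W, 3) HDR equirectangular map; returns a unit 3-vector."""
    h, w, _ = env_map.shape
    theta = (np.arange(h) + 0.5) / h * np.pi           # polar angle per row
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi       # azimuth per column
    theta, phi = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(theta) * np.cos(phi),      # x
                     np.cos(theta),                     # y (up)
                     np.sin(theta) * np.sin(phi)], -1)  # z
    lum = env_map @ np.array([0.2126, 0.7152, 0.0722])  # per-pixel luminance
    weights = lum * np.sin(theta)                        # sin(theta) = solid-angle weight
    d = (weights[..., None] * dirs).sum(axis=(0, 1))
    return d / np.linalg.norm(d)

env = np.random.rand(64, 128, 3) ** 4  # placeholder "HDR" map with a few bright spots
print(dominant_light_direction(env))
```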

[Image: Machine learning model for estimating scene illumination in Portrait Light]

Once the synthetic lighting’s direction and intensity are established, the next machine learning model adds the synthetic light source to the original photo. The second model was trained using millions of pairs of portraits, both with and without extra lights. This dataset was generated by photographing seventy different people using the Light Stage computational illumination system, which is a spherical lighting rig that includes 64 cameras with different viewpoints and 331 individually-programmable LED light sources.

Each of the seventy subjects was captured while illuminated one-light-at-a-time (OLAT) by each of the 331 LEDs. This generated their reflectance field, i.e., their appearance as illuminated by the discrete sections of the spherical environment. The reflectance field encoded the unique color and light-reflecting properties of the subject’s skin, hair, and clothing and determined how shiny or dull each material appeared in the photos.

These OLAT images were then linearly added together to render realistic images of the subject as they would appear in any image-based lighting environment, with complex light transport phenomena like subsurface scattering correctly represented.
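
That linear combination is standard image-based relighting math, so here’s a minimal NumPy sketch of the idea. The OLAT captures and per-LED weights are made-up placeholder arrays; the point is just that a weighted sum of the one-light-at-a-time images reproduces the subject under a full lighting environment.

```python
# Minimal sketch of OLAT-based relighting (illustrative, not Google's pipeline):
# a relit image is a weighted sum of the one-light-at-a-time captures, where each
# weight is the target environment's intensity in that LED's direction.
import numpy as np

NUM_LEDS, H, W = 331, 32, 32                      # 331 LEDs, as in the Light Stage
olat = np.random.rand(NUM_LEDS, H, W, 3)          # placeholder OLAT captures (linear RGB)
weights = np.random.rand(NUM_LEDS, 3)             # placeholder per-LED RGB weights

# Because light transport is linear, summing weighted OLAT images reproduces the
# subject under the full environment, including effects like subsurface scattering.
relit = np.einsum("nhwc,nc->hwc", olat, weights)
print(relit.shape)  # (32, 32, 3)
```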

[Image: Machine learning model combining OLAT images to generate all possible lighting conditions]

Then, instead of training the machine learning algorithm to predict the output relit images directly, Google trained the model to output a low-resolution quotient image that can be applied to the original input image to produce the desired output. This approach is computationally efficient and encourages only low-frequency lighting changes, without impacting high-frequency image details, which are transferred directly from the input image to maintain quality.
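
The quotient-image step itself is simple to express. The hedged sketch below fakes the network’s low-resolution prediction with random data and just shows how the ratio image is upsampled and multiplied onto the input, which is why only smooth, low-frequency lighting changes get through.

```python
# Sketch of applying a low-resolution quotient image (illustrative only):
# the network would predict `quotient_lowres`; here it is random placeholder data.
import numpy as np

def apply_quotient(image: np.ndarray, quotient_lowres: np.ndarray) -> np.ndarray:
    """image: (H, W, 3) linear RGB; quotient_lowres: (h, w, 3) predicted ratios."""
    H, W, _ = image.shape
    h, w, _ = quotient_lowres.shape
    # Nearest-neighbour upsample keeps the example dependency-free; a real
    # pipeline would use a smoother (e.g. bilinear) upsample.
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    quotient = quotient_lowres[rows][:, cols]
    # Multiplying by a smooth, low-resolution ratio changes lighting globally but
    # leaves the input's high-frequency detail untouched.
    return image * quotient

img = np.random.rand(256, 256, 3)
q = 0.8 + 0.4 * np.random.rand(16, 16, 3)   # mild relighting ratios around 1.0
print(apply_quotient(img, q).shape)
```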

[Image: The entire Portrait Light process in one diagram]

Furthermore, Google wanted the model to emulate the optical behavior of light sources reflecting off relatively matte surfaces. To do so, it trained the model to estimate surface normals from the input photo and then applied Lambert’s law to compute a “light visibility map” for the desired lighting direction. This light visibility map is provided as an additional input to the quotient image predictor so that the model is trained using physics-based insights.
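
Lambert’s law boils down to a clamped dot product between the surface normal and the light direction, so the “light visibility map” can be sketched in a few lines. The normals below are placeholder data standing in for the model’s estimates.

```python
# Sketch of a Lambertian "light visibility map" (illustrative): for each pixel,
# visibility = max(0, n · l), where n is the estimated surface normal and l is
# the desired lighting direction. Normals here are random placeholder data.
import numpy as np

def light_visibility_map(normals: np.ndarray, light_dir: np.ndarray) -> np.ndarray:
    """normals: (H, W, 3) unit surface normals; light_dir: unit 3-vector."""
    return np.clip(normals @ light_dir, 0.0, None)

normals = np.random.randn(128, 128, 3)
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)  # normalize to unit length
light = np.array([0.3, 0.8, 0.52])
light /= np.linalg.norm(light)
vis = light_visibility_map(normals, light)
print(vis.shape, float(vis.min()), float(vis.max()))
```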

While all of this may seem like a lengthy process that would take the Pixel 5’s mid-range hardware a fair bit of time to process, Google claims that the Portrait Light feature was optimized to run at interactive frame rates on mobile devices, with a total model size of under 10MB.

The post Google details the technology behind Pixel’s Portrait Light feature appeared first on xda-developers.




Sunday, December 13, 2020

This could be our first look at the OnePlus 9, the company’s early 2021 flagship 5G phone

We first heard about the OnePlus 9 back in October through a report which claimed OnePlus was planning to launch its 2021 flagship lineup earlier than its usual timeline. This was shortly followed by the revelation of probable codenames for the OnePlus 9 series, which indicated a Verizon variant in the works. We also got our first look at the OnePlus 9 through some leaked CAD renders that showed off the phone’s probable design. Now, some photos of what appears to be a pre-production OnePlus 9 unit have surfaced on the internet, giving us the closest look yet at this upcoming flagship smartphone.

These photos were obtained by PhoneArena and very closely match what we have seen so far in the previous leaks. The photos show off the OnePlus 9 design in its full glory, leaving little to the imagination. Starting from the front, we can see the OnePlus 9 has a flat 6.5-inch hole-punch display with very slim bezels all around, closely resembling the OnePlus 8T. PhoneArena mentions it’s a 120Hz display with an aspect ratio of 20:9 and a resolution of 2400 x 1080 (FHD+). Above the display, we can see the earpiece grill, while at the very top, there’s a secondary noise-cancellation microphone.

[Images: Leaked photos of the OnePlus 9, front and back]

The SIM tray, USB-C port, and a speaker are located at the bottom. Moving to the back, we see a rectangular camera module housing the triple camera setup, comprising two large sensors and a smaller one, along with Ultra Shot branding and an LED flash. There’s also an unusual logo in the middle; pre-production units are often tested with placeholder logos to avoid recognition, and the final phone will almost certainly carry the regular OnePlus branding.

[Images: OnePlus 9 SIM tray and bottom edge]

Apart from the overall design, the leak has also revealed some key specifications of the OnePlus 9. As per the screenshots obtained by PhoneArena, the unit in question is powered by a chipset called Lahaina, which is a codename for the new Qualcomm Snapdragon 888. The phone runs Android 11 along with a November security patch and has 8GB of RAM, 128GB of onboard storage, and a 4,500 mAh battery. Lastly, the screenshots reveal a 12MP primary camera and 4MP front camera, which we believe are binned values and will likely translate into 48MP and 16MP units.

The post This could be our first look at the OnePlus 9, the company’s early 2021 flagship 5G phone appeared first on xda-developers.




OPPO “slide-phone” is a triple-hinge foldable design concept

Foldables are widely predicted to be the future of smartphones, tasked with turning uniform glass slabs into designs with more individuality and character. We’ve seen a few iterations of shipping foldable hardware, but there’s broad agreement that more work remains to be done on this front. OPPO has some ideas on how foldables could work, and it is collaborating with Japanese design studio nendo to showcase a triple-hinge foldable design concept.

Tentatively called the “slide-phone,” this is primarily a design concept from OPPO, meaning it exists only on paper for now with no working prototype. It’s a vision of a future that could materialize, and it’s interesting to see which directions OEMs consider feasible for consumer technology.

The slide-phone has three hinges, allowing the display to fold in three. But unlike the triple-fold concepts from Xiaomi and TCL, this design is meant to work more along the lines of a Galaxy Z Flip than a Galaxy Z Fold. Sliding up one fold exposes 40mm of the screen for simple functions that don’t need the full display, such as call history, notifications, and music player controls.

[Image: OPPO slide-phone triple-hinge foldable design concept]

Sliding up a second fold reveals 80mm of the screen, which OPPO believes would be ideal for taking selfies. The cameras on this design concept sit only on the back, so this partially folded position lets the rear cameras handle front-camera use cases.

[Image: OPPO slide-phone triple-hinge foldable design concept]

There are no release dates, timelines, or commitments attached to this design concept; there isn’t even a working prototype. But OPPO is no stranger to bringing such ideas to life, as it did with the OPPO X 2021 rollable concept smartphone. So if there is enough interest, it might just happen.

The post OPPO “slide-phone” is a triple-hinge foldable design concept appeared first on xda-developers.




Google introduces new Entity Extraction, Selfie Segmentation APIs to ML Kit

A few years ago, Google introduced ML Kit to make it easier for developers to implement machine learning in their apps. Since then, we’ve seen APIs for Digital Ink Recognition, On-Device Translation, and Face Detection. Now, Google is adding a new Entity Extraction API to ML Kit, along with a new Selfie Segmentation feature.

Google said the new Entity Extraction API will allow developers to detect and locate entities from raw text, and take action based on those entities.

“The API works on static text and also in real-time while a user is typing,” Google said. “It supports 11 different entities and 15 different languages (with more coming in the future) to allow developers to make any text interaction a richer experience for the user.”

Here are the entities that are supported:

  • Address (350 third street, cambridge)
  • Date-Time* (12/12/2020, tomorrow at 3pm, let’s meet tomorrow at 6pm)
  • Email (entity-extraction@google.com)
  • Flight Number* (LX37)
  • IBAN* (CH52 0483 0000 0000 0000 9)
  • ISBN* (978-1101904190)
  • Money (including currency)* ($12, 25USD)
  • Payment Card* (4111 1111 1111 1111)
  • Phone Number ((555) 225-3556, 12345)
  • Tracking Number* (1Z204E380338943508)
  • URL (www.google.com, https://ift.tt/HC3Biz, seznam.cz)

[Image: Entity Extraction suggestions in the TamTam chat app]

Google said it has been testing the Entity Extraction API with the messaging app TamTam, where it can provide helpful suggestions to users during chat conversations. When an address appears on screen, for example, tapping on it brings up a menu to copy the address, open it in another app, or get directions to the location.

The neural network annotators/models in the Entity Extraction API work as follows: a given input text is first split into words (based on space separation), then all possible word subsequences up to a certain maximum length (15 words) are generated, and for each candidate the scoring neural net assigns a value (between 0 and 1) based on whether it represents a valid entity.

Next, the generated entities that overlap are removed, favoring the ones with a higher score over the conflicting ones with a lower score. Then a second neural network is used to classify the type of the entity as a phone number, an address, or in some cases, a non-entity.
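
Google’s models aren’t public, but the candidate-generation and overlap-removal logic described above is easy to sketch. In the toy Python example below, the scoring function is a crude stand-in for the neural net; everything else follows the steps Google outlines.

```python
# Illustrative sketch of the described pipeline: split text into words, generate all
# word subsequences up to a maximum length, score each candidate, then keep
# higher-scoring spans and drop overlapping lower-scoring ones. The scorer here is
# a toy stand-in for the neural net.
import re
from typing import Callable, List, Tuple

Span = Tuple[int, int, float]  # (start word index, end word index exclusive, score)

def extract_entities(text: str, score: Callable[[str], float],
                     max_len: int = 15, threshold: float = 0.5) -> List[Tuple[str, float]]:
    words = text.split()  # split on whitespace, as in the description
    candidates: List[Span] = []
    for i in range(len(words)):
        for j in range(i + 1, min(i + max_len, len(words)) + 1):
            s = score(" ".join(words[i:j]))          # value in [0, 1] from the scorer
            if s >= threshold:
                candidates.append((i, j, s))
    # Remove overlaps, favoring higher-scoring candidates.
    kept: List[Span] = []
    for i, j, s in sorted(candidates, key=lambda c: -c[2]):
        if all(j <= ki or i >= kj for ki, kj, _ in kept):
            kept.append((i, j, s))
    return [(" ".join(words[i:j]), s) for i, j, s in sorted(kept)]

def toy_score(span: str) -> float:
    """Toy scorer: 'likes' spans that look like phone numbers or URLs."""
    if re.fullmatch(r"[\d\s()\-+]{7,}", span):
        return 0.9
    if re.fullmatch(r"(https?://)?\w[\w.-]*\.\w{2,}\S*", span):
        return 0.8
    return 0.0

print(extract_entities("call me at (555) 225-3556 or visit www.google.com", toy_score))
```

In the real API, a second model would then label each surviving span with its entity type, as described above.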

Google said ML Kit’s Entity Extraction API builds on technology that powered the Smart Linkify feature introduced with Android 10.

[Image: Google Selfie Segmentation]

In addition to text-based Entity Extraction, Google also announced a new Selfie Segmentation API. The feature will allow developers to separate the background from a scene. This will enable users to add cool effects to selfies or even insert themselves into a better background. Google said the new API is capable of producing great results with low latency on both Android and iOS.
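
ML Kit itself ships as an Android and iOS SDK, so as a rough stand-in for the same idea in Python, the sketch below uses MediaPipe’s Selfie Segmentation solution (a related Google offering, not ML Kit) to matte a selfie onto a new background. The input filename is just a placeholder.

```python
# Rough illustration of selfie segmentation using MediaPipe's Python solution
# (a related Google offering; ML Kit itself is an Android/iOS SDK).
# Requires: pip install mediapipe opencv-python
import cv2
import mediapipe as mp
import numpy as np

image_bgr = cv2.imread("selfie.jpg")                # placeholder input file
background = np.zeros_like(image_bgr)
background[:] = (0, 180, 0)                         # flat green replacement background

with mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1) as segmenter:
    result = segmenter.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))

# segmentation_mask is a float map in [0, 1]; treat it as a soft alpha matte.
alpha = result.segmentation_mask[..., None]
composite = (alpha * image_bgr + (1 - alpha) * background).astype(np.uint8)
cv2.imwrite("selfie_new_background.jpg", composite)
```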

The ML Kit SDK incorporates years of Google’s work on machine learning into a Firebase package that mobile app developers can use to enhance their apps. Since ML Kit was introduced, a number of APIs have been unveiled that make implementing machine learning in apps much easier for developers. With Entity Extraction and Selfie Segmentation, apps of the future are going to get even better.

The post Google introduces new Entity Extraction, Selfie Segmentation APIs to ML Kit appeared first on xda-developers.




Get everything you need to succeed with Samsung’s Work and Wellness Pack

At this point, you’re really starting to cut it close with holiday shopping. Items you order now may not make it in time for Christmas, and there isn’t much worse than missing just one piece of the perfect gift come the big day. So, why risk it when you can get the Samsung Work and Wellness Pack instead?

Samsung knew that some of us procrastinate when it comes to getting gifts, so enter the Work and Wellness Pack. This tech bundle includes pretty much everything you’d need for an optimal smartphone experience: the Galaxy Note 20 Ultra, Galaxy Buds Live, Galaxy Watch 3, and the Wireless Charger Pad Trio. Honestly, with a bundle like this, you’d have all the Samsung essentials in one go. You just need to pick up a case for your new phone!

Better yet, grabbing the Samsung Work and Wellness Pack gets you a hefty discount compared to buying these items separately. How does saving $498 sound? That’s more than the MSRP of the Galaxy Watch 3 itself! In addition, you’ll get two other freebies in the form of 6 months of Spotify Premium and 4 months of YouTube Premium.

The Work and Wellness Pack’s grand total is $1,650, but you can also sign up for monthly payments, bringing the cost down to a much more affordable $45.84 per month for 36 months. Pick between the Black Pack and the Bronze Pack and have that gift for a loved one (or yourself) squared away!

    Samsung Work and Wellness Pack
    Available in the Black Pack or Bronze Pack, get the awesome Galaxy Note 20 Ultra, earbuds, a smartwatch, and a wireless charger all in one go! You'll save nearly $500 with the Work and Wellness Pack, too!

Don’t need the entire pack? You can check out our Galaxy Note 20 Ultra deals if you are just looking for a new smartphone! You can also head over to the Samsung Store’s smartphone offers and see how much you can save with a proper trade-in.

The post Get everything you need to succeed with Samsung’s Work and Wellness Pack appeared first on xda-developers.


