AV1

Developed by Alliance for Open Media
Initial release 28 March 2018 (2018-03-28)
Type of format Compressed video
Open format? Yes
Website aomediacodec.github.io/av1-spec

AOMedia Video 1 (AV1) is an open, royalty-free video coding format designed for video transmission over the Internet. It is being developed by the Alliance for Open Media (AOMedia), a consortium founded in 2015 of firms from the semiconductor industry, video on demand providers, and web browser developers. The AV1 bitstream specification includes a reference video codec.

AV1 is meant to succeed its predecessor VP9 and compete with HEVC/H.265 from the Moving Picture Experts Group.[1] It is the primary contender for standardization by the video standard working group NetVC of the Internet Engineering Task Force (IETF).[2] The group has put together a list of criteria to be met by the new video standard.[3]

AV1 is intended for use together with the audio format Opus in a future version of the WebM container format for HTML5 web video and WebRTC.[4]

History

The first official announcement of the project came with the press release on the formation of the Alliance on 1 September 2015.[5] Growing use of its predecessor VP9 has been attributed to confidence in the Alliance and its development of AV1, as well as to the costly and complicated licensing situation of HEVC (High Efficiency Video Coding).[6][7]

The roots of the project precede the Alliance, however. Individual contributors started experimental technology platforms years before: Xiph's/Mozilla's Daala already published code in 2010, VP10 was announced on 12 September 2014,[8] and Cisco's Thor was published on 11 August 2015. The first version 0.1.0 of the AV1 reference codec was published on 7 April 2016.

A soft feature freeze took place at the end of October 2017, although development of a few significant features was allowed to continue past it. The bitstream format was projected to be frozen in January 2018;[9] however, this was delayed by unresolved critical bugs, final changes to transformations, syntax, and motion vector prediction, and the completion of legal analysis.[10] The Alliance announced the release of the AV1 bitstream specification on 28 March 2018, along with a reference, software-based encoder and decoder.[11][12] On 25 June 2018, a validated version 1.0.0 of the specification was released.[13][14]

Martin Smole from AOM member Bitmovin said that the computational performance of the reference encoder was the greatest remaining challenge after the bitstream format freeze.[15] While the format was still being worked on, the encoder was not targeted for production use and received no speed optimizations; as a result, it works orders of magnitude slower than existing HEVC encoders. Development was planned to shift its focus towards maturing the reference encoder after the freeze.

Purpose

AV1 aims to be a video format for the web that is both state of the art and royalty free.[16] The mission of the Alliance for Open Media is the same as that of the WebM project.[17]

To fulfill the goal of being royalty free, the development process requires that no feature be adopted before it has been independently verified not to infringe on patents of competing companies.[17] This contrasts with its main competitor HEVC, for which a review of intellectual property rights (IPR review) was not part of the standardization process.[6] An IPR review is stipulated in the ITU-T's definition of an open standard.

The possible existence of yet unknown patents has been a recurring concern in the field of royalty-free multimedia formats; the concern has been raised regarding AV1,[18] and previously VP9,[19] Theora[20] and IVC.[21] The problem of unforeseen patents is not unique to royalty-free formats, but it uniquely threatens their status as royalty-free. In contrast, IPR avoidance has not traditionally been a priority in MPEG's business model for royalty-bearing formats (although the MPEG chairman argues it has to change).[22]

Patent licensing | AV1, VP9, Theora, etc. | HEVC, AVC, etc. | GIF, MP3, MPEG-1, etc.
By known patent holders | Royalty-free | Royalty-bearing | Expired
By unknown patent holders | Impossible to know until expiry

Under patent rules adopted from the World Wide Web Consortium (W3C), technology contributors license their AV1-connected patents to anyone, anywhere, anytime based on reciprocity, i.e. as long as the user does not engage in patent litigation.[23] As a defensive condition, anyone engaging in patent litigation loses the right to the patents of all patent holders.[6]

The performance goals include "a step up from VP9 and HEVC" in efficiency for a low increase in complexity.[17] NetVC's efficiency goal is a 25% improvement over HEVC.[3] The primary complexity concern is software decoding, since hardware support will take time to reach users.[17] For WebRTC, however, live encoding performance is also relevant, which is Cisco's agenda: Cisco is a manufacturer of videoconferencing equipment, and its Thor contributions aim at "reasonable compression at only moderate complexity".[24]

Feature-wise, AV1 is specifically designed for real-time applications (especially WebRTC) and higher resolutions (wider color gamuts, higher frame rates, UHD) than typical usage scenarios of the current generation (H.264) of video formats, where it is expected to achieve its biggest efficiency gains. It is therefore planned to support the color space from ITU-R Recommendation BT.2020 and up to 12 bits of precision per color component.[25] AV1 is primarily intended for lossy encoding, although lossless compression is supported as well.[26]

AV1-based containers have also been proposed as a replacement for JPEG, similar to Better Portable Graphics and High Efficiency Image File Format which wrap HEVC.[27]

Technology

AV1 is a traditional block-based frequency transform format featuring new techniques, several of which were developed in experimental formats that have been testing technology for a next-generation format after HEVC and VP9.[28] Based on Google's experimental VP9 evolution project VP10,[29] AV1 incorporates additional techniques developed in Xiph's/Mozilla's Daala and Cisco's Thor.

libaom
Developer(s) Alliance for Open Media
Written in C, assembly
License BSD 2-Clause (free)
Website aomedia.googlesource.com/aom

The Alliance published a reference implementation written in C and assembly language (aomenc, aomdec) as free software under the terms of the BSD 2-Clause License.[30] Development happens in public and is open for contributions, regardless of AOM membership.

There is another open source encoder, namely rav1e, which – unlike aomenc – aims to be the simplest and fastest conforming encoder at the expense of efficiency.[31]

The development process is such that coding tools are added to the reference codebase as experiments, controlled by flags that enable or disable them at build time. They are reviewed by other group members as well as by specialized teams that help ensure hardware friendliness and compliance with intellectual property rights (TAPAS). Once a feature gains some support in the community, the experiment is enabled by default, and its flag is ultimately removed once all reviews have been passed.[32] Experiment names are lowercased in the configure script and uppercased in conditional compilation flags.[33]
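As an illustration of the naming convention, a minimal sketch (the helper itself is hypothetical, but the CONFIG_ prefix appears in flags such as CONFIG_EXT_INTER and CONFIG_CB4X4 cited in the references):

```python
def compile_flag(experiment: str) -> str:
    """Derive the uppercase conditional-compilation flag from a lowercase
    configure-script experiment name (CONFIG_ prefix assumed, as seen in
    flags like CONFIG_EXT_INTER and CONFIG_CB4X4)."""
    return "CONFIG_" + experiment.upper()

# e.g. the ext_tx experiment is guarded by CONFIG_EXT_TX at compile time
```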

Data transformation

To transform the error remaining after prediction to the frequency domain, AV1 uses square and rectangular DCTs, as well as an asymmetric DST[34] for blocks where the top and/or left edge is expected to have lower error thanks to prediction from nearby pixels.

It can combine two one-dimensional transforms in order to use different transforms for the horizontal and the vertical dimension (ext_tx[35]).[36]
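The idea of combining two one-dimensional transforms can be sketched as follows. The DCT-II and sine-transform implementations are textbook versions for illustration only; the sine transform's normalization here is illustrative and is not AV1's exact asymmetric DST.

```python
import math

def dct2_1d(x):
    # 1-D DCT-II (orthonormal), one of the 1-D transforms a codec can pick
    n = len(x)
    return [
        math.sqrt((1 if k == 0 else 2) / n)
        * sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n)) for i in range(n))
        for k in range(n)
    ]

def dst7_1d(x):
    # 1-D DST-VII, shown as an example of an asymmetric sine transform
    n = len(x)
    return [
        math.sqrt(2 / (n + 0.5))
        * sum(x[i] * math.sin(math.pi * (i + 1) * (2 * k + 1) / (2 * n + 1)) for i in range(n))
        for k in range(n)
    ]

def separable_2d(block, row_tx, col_tx):
    """Apply one 1-D transform along rows and a different one along columns,
    the idea behind AV1's ext_tx transform combinations."""
    rows = [row_tx(list(r)) for r in block]
    cols = list(zip(*rows))
    out_cols = [col_tx(list(c)) for c in cols]
    return [list(r) for r in zip(*out_cols)]
```

Choosing, say, `dst7_1d` for rows and `dct2_1d` for columns models a block whose left edge is expected to be well predicted while the vertical direction is not.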

Partitioning

T-shaped partitioning

Prediction can operate on bigger units (up to 128×128 pixels), which can be subpartitioned in more ways. "T-shaped" partitioning schemes for coding units are introduced, a feature developed for VP10. Two separate predictions can now be used on spatially different parts of a block, with a smooth, wedge-shaped transition line between them (wedge-partitioned prediction).[37] This enables more accurate separation of objects than the traditional staircase lines along the boundaries of square blocks.

Parallelism within a frame is possible in tiles (vertical) and tile rows (horizontal).

More encoder parallelism is possible thanks to configurable prediction dependency between tile rows.[38]

Prediction

AV1 performs internal processing in higher precision (10 or 12 bits per sample), which leads to compression improvement due to smaller rounding errors in reference imagery.

Predictions can be combined in more advanced ways (than a uniform average) in a block (compound prediction), including smooth and sharp transition gradients in different directions (wedge-partitioned prediction) as well as implicit masks that are based on the difference between the two predictors. This allows combination of either two inter predictions or an inter and an intra prediction to be used in the same block.[39][37]
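A minimal sketch of mask-based compound prediction (function and parameter names are hypothetical): two predictor blocks are mixed per pixel by a weight mask. A hard 0/1 mask gives a plain wedge split; intermediate weights give the smooth transition along the wedge boundary.

```python
def wedge_blend(pred_a, pred_b, mask):
    """Combine two same-sized predictions per pixel.
    mask holds weights in [0, 1]: weight m takes m of pred_a
    and (1 - m) of pred_b at that pixel."""
    return [
        [m * a + (1.0 - m) * b for a, b, m in zip(ra, rb, rm)]
        for ra, rb, rm in zip(pred_a, pred_b, mask)
    ]
```

The same mechanism covers both cases mentioned above: the two inputs can be two inter predictions, or one inter and one intra prediction.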

A frame can reference 6 instead of 3 of the 8 available frame buffers for temporal (inter) prediction.

The Warped Motion (warped_motion[40])[36] and Global Motion (global_motion[41]) tools in AV1 aim to reduce redundant information in motion vectors by recognizing patterns arising from camera motion.[38][36] They implement ideas that preceding formats such as MPEG-4 ASP attempted to exploit, albeit with a novel approach that works in three dimensions. A set of warping parameters for a whole frame can be offered in the bitstream, or blocks can use a set of implicit local parameters computed from surrounding blocks.
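The frame-level idea can be sketched with an affine model. The parameter layout below is hypothetical and simplified; AV1's actual warp parameter syntax and fixed-point precision differ.

```python
def warp_point(model, x, y):
    """Project a pixel position through an affine global-motion model.
    model = (tx, ty, a, b, c, d), a hypothetical layout meaning:
        x' = a*x + b*y + tx
        y' = c*x + d*y + ty
    One such parameter set can describe the motion of a whole frame,
    e.g. a camera pan (pure translation) or zoom (scaling a, d)."""
    tx, ty, a, b, c, d = model
    return (a * x + b * y + tx, c * x + d * y + ty)
```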

For intra prediction, there are 56 (instead of 8) angles for directional prediction, and weighted filters for per-pixel extrapolation. The "TrueMotion" predictor was replaced with a Paeth predictor, which compares the known pixel in the above-left corner with the pixels directly above and directly left of the new one, and then chooses the one that lies in the direction of the smaller gradient. A palette predictor is available for blocks with very few colors, as in some computer screen content. Correlations between the luminosity and the color information can now be exploited with a predictor for chroma blocks that is based on samples from the luma plane (cfl).[36] To reduce discontinuities along the borders of inter-predicted blocks, predictors can be overlapped and blended with those of neighbouring blocks (overlapped block motion compensation).[42]
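The Paeth rule can be sketched as follows. This is the classic three-candidate form (the same idea as PNG's Paeth filter), which also allows the corner pixel itself to be chosen:

```python
def paeth_predict(left, above, above_left):
    """Predict a pixel from its left, above, and above-left neighbours:
    form a base value from the two gradients, then pick the neighbour
    closest to that base."""
    base = left + above - above_left
    # distance of each neighbour from the base value
    d_left = abs(base - left)
    d_above = abs(base - above)
    d_above_left = abs(base - above_left)
    if d_left <= d_above and d_left <= d_above_left:
        return left
    if d_above <= d_above_left:
        return above
    return above_left
```

For example, if the column containing `left` continues a vertical gradient (small change from `above_left` to `left`), the predictor follows the pixel `above`.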

Quantization

AV1 has new optimized quantization matrices.[43]

Filters

For the in-loop filtering step, the integration of Thor's constrained low-pass filter and Daala's directional deringing filter has been fruitful: the combined Constrained Directional Enhancement Filter (cdef[44]) exceeds the results of using the original filters separately or together.[45][46] It is an edge-directed conditional replacement filter that smooths blocks with configurable (signaled) strength, roughly along the direction of the dominant edge, to eliminate ringing artifacts.

There is also the loop restoration filter (loop_restoration) to remove blur artifacts due to block processing.[36]

Film grain synthesis (film_grain) improves coding of noisy signals using a parametric video coding approach. Due to the randomness inherent to film grain noise, this signal component is traditionally either very expensive to code or prone to getting damaged or lost, possibly leaving serious coding artifacts as residue. This tool circumvents these problems using analysis and synthesis, replacing part of the signal with a visually similar synthetic texture, based solely on subjective visual impression rather than objective similarity. It removes the grain component from the signal, analyzes its non-random characteristics, and transmits only descriptive parameters to the decoder, which adds back a synthetic, pseudorandom noise signal shaped after the original component.
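The decoder-side step can be sketched as follows. This toy version (hypothetical names) simply adds seeded uniform noise of a signaled strength, whereas the real tool shapes the noise with an autoregressive model per AV1's film_grain syntax.

```python
import random

def synthesize_grain(denoised, seed, strength):
    """Decoder-side sketch: add pseudorandom noise back onto the denoised
    signal. seed and strength stand in for the descriptive grain parameters
    carried in the bitstream; the output is reproducible from them, which
    is all the decoder needs for a visually similar (not identical) grain."""
    rng = random.Random(seed)
    return [x + strength * (rng.random() * 2.0 - 1.0) for x in denoised]
```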

Entropy coding

Daala's entropy coder (daala_ec[47][48]), a non-binary arithmetic coder, was selected to replace VP9's binary entropy coder. The use of non-binary arithmetic coding helps evade patents, but also adds bit-level parallelism to an otherwise serial process, reducing clock rate demands on hardware implementations.[7] In effect, the efficiency of modern binary arithmetic coding such as CABAC is approached while working on a larger-than-binary alphabet for greater speed, as in Huffman coding (though not as simple and fast as Huffman coding). AV1 also gained the ability to adapt the symbol probabilities in the arithmetic coder per coded symbol instead of per frame (ec_adapt[49]).[36][6]
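Per-symbol adaptation can be sketched with a simple frequency-count model (hypothetical helper, floating point for clarity; the real coder updates fixed-point CDFs with shift operations rather than recomputing frequencies):

```python
def update_probs(counts, symbol):
    """After coding `symbol` from a multi-symbol alphabet, bump its count
    and return the renormalized probabilities -- a toy stand-in for
    per-symbol adaptation (ec_adapt), as opposed to VP9's per-frame update."""
    counts[symbol] += 1
    total = sum(counts)
    return [c / total for c in counts]
```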

Former experiments that have been fully integrated

This list is no longer complete, as its content is being rewritten in prose.

Historic build-time flag | Explanation
alt_intra[50] | A new prediction mode suitable for smooth regions[36]
aom_qm | Quantization matrices[43]
cb4x4[51] |
cdef[44] | Constrained Directional Enhancement Filter: the merge of Daala's directional deringing filter and Thor's constrained low-pass filter[45][52]
cdef_singlepass | An optimization of cdef[46]
cfl | Chroma from Luma[36][53]
chroma_sub8x8[54] |
compound_segment[55] |
convolve_round[56] |
delta_q[57] | Delta quantization step: arbitrary adaptation of quantizers within a frame[36][58]
daala_ec[47] | The Daala entropy coder (a non-binary arithmetic coder)[48]
dual_filter | Ability to choose between 4 horizontal and vertical interpolation filters for subpixel motion compensation[36] (three 8-tap and one 12-tap)[58]
ec_adapt[49] | Adapts symbol probabilities on the fly,[36] as opposed to per frame as in VP9[6]
ec_smallmul[59] | A hardware optimization of daala_ec[52]
ext_inter[60] | Extended inter:[38][36] weighted compound prediction with variable weights per block[58]
ext_intra | Extended intra:[38] generic directional intra predictor with 65 angular modes[36][58]
ext_refs[61] | Extended reference frames:[36] extends the number of references to six and provides more flexibility on bi-prediction[58]
ext_tile | Option of no dependency across tile rows[36]
ext_tx[35] | Ability to choose different horizontal and vertical transforms[36][58]
filter_7bit[62] | 7-bit interpolation filters[63]
filter_intra | Interpolates the reference samples before prediction to reduce the impact of quantization noise[36]
global_motion[41] | Global Motion[38][36]
interintra[64] | Inter-intra prediction, part of wedge-partitioned prediction[37]
loop_restoration | Removes blur artifacts due to block processing[36]
motion_var[65] | Renamed from obmc.[66] Overlapped Block Motion Compensation: reduces discontinuities at block edges using different motion vectors[36][58]
new_multisymbol[67] | Codes extra_bits using up to 5 non-adaptive symbols, starting from the LSB[68]
one_sided_compound[69] |
palette[70] | Palette prediction: an intra coding tool for screen content[71]
palette_delta_encoding[72] |
rect_intra_pred[73] |
rect_tx[74] | Rectangular transforms[75][58]
ref_mv[76] | Better methods for coding the motion vector predictors through an implicit list of spatial and temporal neighbor MVs[36][58]
smooth_hv[77] |
tile_groups[78] | Independent groups of tiles; within a group, rows of tiles may or may not be independent[58]
txmg | Merges high/low bitdepth transforms[79]
var_tx[80] | Recursive transform block partition and coding scheme[81]
warped_motion[40] | Warped Motion[36][58]
wedge[55] | Wedge-partitioned prediction[37]

Current experiments

Only explained experiments are listed.

Enabled by default | Build-time flag[82] | Explanation
Yes | dist_8x8 | A merge of the former experiments cdef_dist and daala_dist.[33] daala_dist is Daala's distortion function.[7]

Notable abandoned features

Daala Transforms implements discrete cosine and sine transforms that its authors describe as "better in every way" than the txmg set of transforms that prevailed in AV1.[83][84][85][86][87] Both the txmg and daala_tx experiments have merged high and low bitdepth code paths (unlike VP9), but daala_tx achieved full embedding of smaller transforms within larger, as well as using fewer multiplies, which could have further reduced the cost of hardware implementations. The Daala transforms were kept as optional in the experimental codebase until late January 2018, but changing hardware blocks at a late stage was a general concern for delaying hardware availability.[88]

The encoding complexity of Daala's Perceptual Vector Quantization (PVQ) was too high within the already complex framework of AV1.[7] The dist_8x8 rate-distortion heuristic aims to speed up the encoder by a sizable factor, with or without PVQ,[7] but PVQ was ultimately dropped.

ANS was the other non-binary arithmetic coder considered, developed in parallel with Daala's entropy coder. Of the two, Daala EC was the more hardware-friendly, but ANS was faster to decode in software.[6]

Quality and efficiency

A first comparison from the beginning of June 2016[89] found AV1 roughly on par with HEVC, as did one using code from late January 2017.[90]

In April 2017, using the 8 enabled experimental features at the time (of 77 total), Bitmovin was able to demonstrate favorable objective metrics, as well as visual results, compared to HEVC on the Sintel and Tears of Steel animated films.[91] A follow-up comparison by Jan Ozer of Streaming Media Magazine confirmed this, and concluded that "AV1 is at least as good as HEVC now".[92]

Ozer noted that his and Bitmovin's results contradicted a comparison by the Fraunhofer Institute for Telecommunications from late 2016[93] that had found AV1 38.4% less efficient than HEVC, underperforming even H.264/AVC. He attributed the discrepancy to his use of encoding parameters endorsed by each encoder vendor, and to the newer AV1 encoder having more features.

Tests from Netflix showed that, based on measurements with PSNR and VMAF at 720p, AV1 could be about 25% more efficient than VP9 (libvpx), at the expense of a 4–10 fold increase in encoding complexity.[94] Similar conclusions with respect to quality were drawn from a test conducted by Moscow State University researchers, where VP9 was found to require 31% and HEVC 22% more bitrate than AV1 for the same level of quality.[95] The researchers found that the used AV1 encoder was operating at a speed “2500–3500 times lower than competitors”, while admitting that it has not been optimized yet.[96]

In a comparison of AV1 against H.264 (x264) and VP9 (libvpx), Facebook showed about 45–50% bitrate savings over H.264 and about 40% over VP9 when using a constant quality encoding mode.[97]

AOMedia provides a list of test results on their website.

Profiles and levels

AV1 defines three profiles for decoders: Main, High, and Professional. The Main profile allows for bit depths of 8 or 10 bits per sample with 4:0:0 (greyscale) and 4:2:0 chroma sampling. The High profile allows the same bit depths with 4:0:0, 4:2:0, and 4:4:4 chroma sampling. The Professional profile allows for bit depths of 8, 10, or 12 bits per sample with 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling.[13]
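The three profiles can be summarized as a simple lookup (a sketch of the constraints as stated above, with hypothetical names; AV1 signals only bit depths of 8, 10, or 12):

```python
AV1_PROFILES = {
    # profile name -> allowed bit depths and chroma sampling formats
    "Main":         {"bit_depths": {8, 10},     "chroma": {"4:0:0", "4:2:0"}},
    "High":         {"bit_depths": {8, 10},     "chroma": {"4:0:0", "4:2:0", "4:4:4"}},
    "Professional": {"bit_depths": {8, 10, 12}, "chroma": {"4:0:0", "4:2:0", "4:2:2", "4:4:4"}},
}

def profile_supports(profile: str, bit_depth: int, chroma: str) -> bool:
    """Check whether a (bit depth, chroma sampling) combination fits a profile."""
    p = AV1_PROFILES[profile]
    return bit_depth in p["bit_depths"] and chroma in p["chroma"]
```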

AV1 defines levels for decoders with maximum variables for levels ranging from 2.0 to 7.3. Example resolutions would be 426×240@30 fps for level 2.0, 854×480@30 fps for level 3.0, 1920×1080@30 fps for level 4.0, 3840×2160@60 fps for level 5.1, 3840×2160@120 fps for level 5.2 and 5.3, and 7680×4320@120 fps for level 6.2.[13] Level 7 has not been defined yet.

Adoption

Like its predecessor VP9, AV1 can be used inside WebM container files alongside the Opus audio format. These formats are well supported among web browsers, with the exception of Safari (which supports only Opus) and the discontinued Internet Explorer, Edge's predecessor (see VP9 in HTML5 video).

From November 2017 onwards, nightly builds of the Firefox web browser contained preliminary support for AV1.[98][99] Upon its release on 9 February 2018, version 3.0.0 of the VLC media player shipped with an experimental AV1 decoder.[100]

Alliance members are expected to adopt the format, each in their respective ways, once the bitstream is frozen.[25][91] The member companies represent several industries, including browser vendors (Apple, Google, Mozilla, Microsoft), content distributors (Apple, Amazon, Facebook, Google, Hulu, Netflix) and hardware designers (AMD, Apple, Arm, Broadcom, Intel, Nvidia).[6][7][101] Video streaming service YouTube declared intent to transition to the new format as fast as possible, starting with the highest resolutions, within six months after the finalization of the bitstream format.[25] Netflix "expects to be an early adopter of AV1".[17]

According to Mukund Srinivasan, chief business officer of AOM member Ittiam, early hardware support will be dominated by software running on non-CPU hardware (such as GPGPU, DSP or shader programs, as is the case with some VP9 hardware implementations), as fixed-function hardware will take 12–18 months after bitstream freeze until chips are available, plus 6 months for products based on those chips to hit the market.[32] The bitstream was finally frozen on 28 March 2018, meaning chips could be available sometime between March and August 2019.[102] According to the above forecast, products based on chips could then be on the market at the end of 2019 or the beginning of 2020.

Mozilla researchers Nathan Egge and Michael Bebenita claimed in an interview in April 2018 that the web browser Mozilla Firefox would have AV1 support enabled by default by the end of 2018.[103]

AV1 support has been added to the MP4 container.[104]

YouTube released a beta launch playlist of AV1 videos at high bit-rates to test decoder performance and show their commitment to AV1.[105]

Software

AV1 Still Image File Format (AVIF)

The AV1 Still Image File Format (AVIF) is a file format wrapping compressed images based on the Alliance for Open Media AV1 intra-frame encoding toolkit. AVIF supports High Dynamic Range (HDR) and wide color gamut (WCG) images as well as standard dynamic range (SDR). Only the intra-frame encoding toolkit is used in AVIF version 1.0. Using the intra-frame encoding mechanism from an existing video codec standard has precedent in WebP (using VP8) and HEIF (using HEVC).

The initial version of AVIF seeks to be simple, with just enough structure to allow the distribution of images based on the AV1 intra-frame coding toolset. At its core, AVIF 1.0 will allow for one or more images plus all supporting data needed to correctly reconstruct and display the images to be conveyed in a file. The ability to embed a thumbnail image will also be provided. An image sequence with suggested playback timing may be defined.[119]

Target features

  • AV1 intra-frame codec toolkit
  • Multiple image storage: untimed unordered collection
  • Animation: timed sequence of images
  • Thumbnail image
  • Alpha channel
  • Extensible image metadata

References

  1. Zimmerman, Steven (15 May 2017). "Google's Royalty-Free Answer to HEVC: A Look at AV1 and the Future of Video Codecs". XDA Developers. Archived from the original on 14 June 2017. Retrieved 10 June 2017.
  2. Rick Merritt (EE Times), 30 June 2016: Video Compression Feels a Pinch
  3. Sebastian Grüner (19 July 2016). "Der nächste Videocodec soll 25 Prozent besser sein als H.265" [The next video codec is supposed to be 25 percent better than H.265] (in German). golem.de. Retrieved 1 March 2017.
  4. Tsahi Levent-Levi (3 September 2015). "WebRTC Codec Wars: Rebooted". BlogGeek.me. Retrieved 1 March 2017. The beginning of the end of HEVC/H.265 video codec
  5. "Alliance for Open Media established to deliver next-generation open media formats" (Press release). Alliance for Open Media. 1 September 2015. Retrieved 5 September 2015.
  6. Timothy B. Terriberry (18 January 2017). "Progress in the Alliance for Open Media" (video). linux.conf.au. Retrieved 1 March 2017.
  7. Timothy B. Terriberry (18 January 2017). "Progress in the Alliance for Open Media (slides)" (PDF). Retrieved 22 June 2017.
  8. Stephen Shankland (12 September 2014). "Google's Web-video ambitions bump into hard reality". CNET. Retrieved 13 September 2014.
  9. Krishnan, Jai (22 November 2017). "Jai Krishnan from Google and AOMedia giving us an update on AV1". YouTube. Retrieved 22 December 2017.
  10. Terriberry, Timothy B. (3 February 2018). "AV1 Codec Update". FOSDEM. Retrieved 8 February 2018.
  11. Alliance for Open Media (28 March 2018). "The Alliance for Open Media Kickstarts Video Innovation Era with "AV1" Release" (Press release). Wakefield, Mass.
  12. Shilov, Anton. "Alliance for Open Media Releases Royalty-Free AV1 1.0 Codec Spec". AnandTech. Retrieved 2 April 2018.
  13. 1 2 3 "AV1 Bitstream and Decoding Process Specification". Alliance for Open Media. Retrieved 26 June 2018. This version 1.0.0 of the AV1 Bitstream Specification corresponds to the Git tag v1.0.0 in the AOMediaCodec/av1-spec project. Its content has been validated as consistent with the reference decoder provided by libaom v1.0.0.
  14. Larabel, Michael (25 June 2018). "AOMedia AV1 Codec v1.0.0 Appears Ready For Release". www.phoronix.com. Retrieved 27 June 2018.
  15. Hunter, Philip (15 February 2018). "Race on to bring AV1 open source codec to market, as code freezes". Videonet. Mediatel Limited. Retrieved 19 March 2018.
  16. Daede, Thomas (5 October 2017). "AV1 Update". YouTube. Retrieved 21 December 2017.
  17. Frost, Matt (31 July 2017). "VP9-AV1 Video Compression Update". Retrieved 21 November 2017. Obviously, if we have an open source codec, we need to take very strong steps, and be very diligent in making sure that we are in fact producing something that's royalty free. So we have an extensive IP diligence process which involves diligence on both the contributor level – so when Google proposes a tool, we are doing our in-house IP diligence, using our in-house patent assets and outside advisors – that is then forwarded to the group, and is then again reviewed by an outside counsel that is engaged by the alliance. So that's a step that actually slows down innovation, but is obviously necessary to produce something that is open source and royalty free.
  18. Jan Ozer (28 March 2018). "AV1 Is Finally Here, but Intellectual Property Questions Remain". Retrieved 21 April 2018.
  19. Jan Ozer (June 2016). "VP9 Finally Comes of Age, But Is it Right for Everyone?". Retrieved 21 April 2018.
  20. Silvia Pfeiffer (December 2009). "Patents and their effect on Standards: Open video codecs for HTML5". Retrieved 21 April 2018.
  21. Leonardo Chiariglione (28 January 2018). "A crisis, the causes and a solution". Retrieved 21 April 2018. two tracks in MPEG: one track producing royalty free standards (Option 1, in ISO language) and the other the traditional Fair Reasonable and Non Discriminatory (FRAND) standards (Option 2, in ISO language). (…) The Internet Video Coding (IVC) standard was a successful implementation of the idea (…). Unfortunately 3 companies made blank Option 2 statements (of the kind “I may have patents and I am willing to license them at FRAND terms”), a possibility that ISO allows. MPEG had no means to remove the claimed infringing technologies, if any, and IVC is practically dead.
  22. Leonardo Chiariglione (28 January 2018). "A crisis, the causes and a solution". Retrieved 21 April 2018. How could MPEG achieve this? Thanks to its “business model” that can simply be described as: produce standards having the best performance as a goal, irrespective of the IPR involved.
  23. Neil McAllister, 1 September 2015: Web giants gang up to take on MPEG LA, HEVC Advance with royalty-free streaming codec – Joining forces for cheap, fast 4K video
  24. Steinar Midtskogen, Arild Fuldseth, Gisle Bjøntegaard, Thomas Davies (13 September 2017). "Integrating Thor tools into the emerging AV1 codec" (PDF). Retrieved 2 October 2017. What can Thor add to VP9/AV1? Since Thor aims for reasonable compression at only moderate complexity, we considered features of Thor that could increase the compression efficiency of VP9 and/or reduce the computational complexity.
  25. Ozer, Jan (3 June 2016). "What is AV1?". Streaming Media. Information Today, Inc. Archived from the original on 26 November 2016. Retrieved 26 November 2016. ... Once available, YouTube expects to transition to AV1 as quickly as possible, particularly for video configurations such as UHD, HDR, and high frame rate videos ... Based upon its experience with implementing VP9, YouTube estimates that they could start shipping AV1 streams within six months after the bitstream is finalized. ...
  26. "examples/lossless_encoder.c". Git at Google. Alliance for Open Media. Retrieved 29 October 2017.
  27. Shankland, Stephen (19 January 2018). "Photo format from Google and Mozilla could leave JPEG in the dust". CNET. CBS Interactive. Retrieved 28 January 2018.
  28. Romain Bouqueau (12 June 2016). "A view on VP9 and AV1 part 1: specifications". GPAC Project on Advanced Content. Retrieved 1 March 2017.
  29. Jan Ozer, 26 May 2016: What Is VP9?
  30. "LICENSE - aom - Git at Google". Aomedia.googlesource.com. Retrieved 26 September 2018.
  31. "The fastest and safest AV1 encoder". Retrieved 9 April 2018.
  32. Ozer, Jan (30 August 2017). "AV1: A status update". Retrieved 14 September 2017.
  33. Cho, Yushin (30 August 2017). "Delete daala_dist and cdef-dist experiments in configure". Retrieved 2 October 2017. Since those two experiments have been merged into the dist-8x8 experiment
  34. Jingning Han, Ankur Saxena, Vinay Melkote, and Kenneth Rose, Jointly Optimized Spatial Prediction and Block Transform for Video and Image Coding, IEEE Transactions on Image Processing, April 2012
  35. Alaiwan, Sebastien (2 November 2017). "Remove experimental flag of EXT_TX". Retrieved 23 November 2017.
  36. "Analysis of the emerging AOMedia AV1 video coding format for OTT use-cases" (PDF). Archived from the original (PDF) on 20 September 2017. Retrieved 19 September 2017.
  37. Converse, Alex (16 November 2015). "New video coding techniques under consideration for VP10 – the successor to VP9". YouTube. Retrieved 3 December 2016.
  38. "Decoding the Buzz over AV1 Codec". 9 June 2017. Retrieved 22 June 2017.
  39. Mukherjee, Debargha; Su, Hui; Bankoski, Jim; Converse, Alex; Han, Jingning; Liu, Zoe; Xu (Google Inc.), Yaowu, "An overview of new video coding tools under consideration for VP10 – the successor to VP9", SPIE Optical Engineering+ Applications, International Society for Optics and Photonics, 9599, doi:10.1117/12.2191104
  40. Alaiwan, Sebastien (31 October 2017). "Remove experimental flag of WARPED_MOTION". Retrieved 23 November 2017.
  41. Alaiwan, Sebastien (30 October 2017). "Remove experimental flag of GLOBAL_MOTION". Retrieved 23 November 2017.
  42. Joshi, Urvang; Mukherjee, Debargha; Han, Jingning; Chen, Yue; Parker, Sarah; Su, Hui; Chiang, Angie; Xu, Yaowu; Liu, Zoe (19 September 2017). "Novel inter and intra prediction tools under consideration for the emerging AV1 video codec". Applications of Digital Image Processing XL, proceedings of SPIE Optical Engineering + Applications 2017. International Society for Optics and Photonics. 10396: 103960F. doi:10.1117/12.2274022.
  43. Davies, Thomas (9 August 2017). "AOM_QM: enable by default". Retrieved 19 September 2017.
  44. Barbier, Frederic (10 November 2017). "Remove experimental flag of CDEF". Retrieved 23 October 2017.
  45. "Constrained Directional Enhancement Filter". 28 March 2017. Retrieved 15 September 2017.
  46. "Thor update". July 2017. Retrieved 2 October 2017.
  47. Egge, Nathan (25 May 2017). "This patch forces DAALA_EC on by default and removes the dkbool coder". Retrieved 14 September 2017.
  48. Egge, Nathan (14 February 2017). "Daala Entropy Coder in AV1" (PDF).
  49. Egge, Nathan (18 June 2017). "Remove the EC_ADAPT experimental flags". Retrieved 23 September 2017.
  50. Joshi, Urvang (1 June 2017). "Remove ALT_INTRA flag". Retrieved 19 September 2017.
  51. Mukherjee, Debargha (21 October 2017). "Remove CONFIG_CB4X4 config options". Retrieved 29 October 2017.
  52. "NETVC Hackathon Results IETF 98 (Chicago)". Retrieved 15 September 2017.
  53. "xiphmont | next generation video: Introducing AV1, part1: Chroma from Luma". xiphmont.dreamwidth.org. Retrieved 10 April 2018.
  54. Su, Hui (23 October 2017). "Remove experimental flag of chroma_sub8x8". Retrieved 29 October 2017.
  55. Mukherjee, Debargha (29 October 2017). "Remove compound_segment/wedge config flags". Retrieved 23 November 2017.
  56. Wang, Yunqing (12 December 2017). "Remove convolve_round/compound_round config flags". Retrieved 17 December 2017.
  57. Davies, Thomas (19 September 2017). "Remove delta_q experimental flag". Retrieved 2 October 2017.
  58. Ian Trow (16 September 2018). Tech Talks: Codec wars (Recorded talk). IBC 2018 Conference. 28 minutes in. Retrieved 18 September 2018.
  59. Terriberry, Timothy (25 August 2017). "Remove the EC_SMALLMUL experimental flag". Retrieved 15 September 2017.
  60. Alaiwan, Sebastien (2 October 2017). "Remove compile guards for CONFIG_EXT_INTER". Retrieved 29 October 2017. This experiment has been adopted
  61. Alaiwan, Sebastien (16 October 2017). "Remove compile guards for CONFIG_EXT_REFS". Retrieved 29 October 2017. This experiment has been adopted
  62. Davies, Thomas (19 September 2017). "Remove filter_7bit experimental flag". Retrieved 29 October 2017.
  63. Fuldseth, Arild (26 August 2017). "7-bit interpolation filters". Retrieved 29 October 2017. Purpose: Reduce dynamic range of interpolation filter coefficients from 8 bits to 7 bits. Inner product for 8-bit input data can be stored in a 16-bit signed integer.
  64. Chen, Yue (30 October 2017). "Remove CONFIG_INTERINTRA". Retrieved 23 November 2017.
  65. Alaiwan, Sebastien (31 October 2017). "Remove experimental flag of MOTION_VAR". Retrieved 23 November 2017.
  66. Chen, Yue (13 October 2017). "Renamings for OBMC experiment". Retrieved 19 September 2017.
  67. Barbier, Frederic (15 November 2017). "Remove experimental flag of NEW_MULTISYMBOL". Retrieved 23 October 2017.
  68. "NEW_MULTISYMBOL: Code extra_bits using multi-symbols". Git at Google. Alliance for Open Media. Retrieved 25 May 2018.
  69. Liu, Zoe (7 November 2017). "Remove ONE_SIDED_COMPOUND experimental flag". Retrieved 23 November 2017.
  70. Joshi, Urvang (1 June 2017). "Remove PALETTE flag". Retrieved 19 September 2017.
  71. "Overview of the Decoding Process (Informative)". Retrieved 21 January 2018. For certain types of image, such as PC screen content, it is likely that the majority of colors come from a very small subset of the color space. This subset is referred to as a palette. AV1 supports palette prediction, whereby non-inter frames are predicted from a palette containing the most likely colors.
  72. Barbier, Frederic (15 December 2017). "Remove experimental flag of PALETTE_DELTA_ENCODING". Retrieved 17 December 2017.
  73. Joshi, Urvang (26 September 2017). "Remove rect_intra_pred experimental flag". Retrieved 2 October 2017.
  74. Mukherjee, Debargha (29 October 2017). "Remove experimental flag for rect-tx". Retrieved 23 November 2017.
  75. Mukherjee, Debargha (1 July 2016). "Rectangular transforms 4x8 & 8x4". Retrieved 14 September 2017.
  76. Alaiwan, Sebastien (27 April 2017). "Merge ref-mv into codebase". Retrieved 23 September 2017.
  77. Joshi, Urvang (9 November 2017). "Remove smooth_hv experiment flag". Retrieved 23 November 2017.
  78. Davies, Thomas (18 July 2017). "Remove the CONFIG_TILE_GROUPS experimental flag". Retrieved 19 September 2017.
  79. Chiang, Angie (31 July 2017). "Add txmg experiment". Retrieved 3 January 2018. This experiment aims at merging lbd/hbd txfms
  80. Alaiwan, Sebastien (24 October 2017). "Remove compile guards for VAR_TX experiment". Retrieved 29 October 2017. This experiment has been adopted
  81. "Add support to recursive transform block coding". Git at Google. Alliance for Open Media. Retrieved 25 May 2018.
  82. "AV1 experiment flags". 29 September 2017. Retrieved 2 October 2017.
  83. "Daala-TX" (PDF). 22 August 2017. Retrieved 26 September 2017. Replaces the existing AV1 TX with the lifting implementation from Daala. Daala TX is better in every way: ● Fewer multiplies ● Same shifts, quantizers for all transform sizes and depths ● Smaller intermediaries ● Low-bitdepth transforms wide enough for high-bitdepth ● Less hardware area ● Inherently lossless
  84. Egge, Nathan (27 October 2017). "Daala Transforms in AV1".
  85. Egge, Nathan (1 December 2017). "Daala Transforms Update".
  86. Egge, Nathan (15 December 2017). "Daala Transforms Evaluation".
  87. Egge, Nathan (21 December 2017). "Daala Transforms Informational Discussion".
  88. "The Future of Video Codecs: VP9, HEVC, AV1". 2 November 2017. Retrieved 30 January 2018.
  89. Sebastian Grüner (9 June 2016). "Freie Videocodecs teilweise besser als H.265" (in German). golem.de. Retrieved 1 March 2017.
  90. "Results of Elecard's latest benchmarks of AV1 compared to HEVC". 24 April 2017. Retrieved 14 June 2017. The most intriguing result obtained after analysis of the data lies in the fact that the developed codec AV1 is currently equal in its performance with HEVC. The given streams are encoded with AV1 update of 2017.01.31
  91. "Bitmovin Supports AV1 Encoding for VoD and Live and Joins the Alliance for Open Media". 18 April 2017. Retrieved 20 May 2017.
  92. Ozer, Jan. "HEVC: Rating the contenders" (PDF). Streaming Learning Center. Retrieved 22 May 2017.
  93. Grois, D.; Nguyen, T.; Marpe, D. "Coding efficiency comparison of AV1/VP9, H.265/MPEG-HEVC, and H.264/MPEG-AVC encoders". IEEE Picture Coding Symposium (PCS) 2016. http://iphome.hhi.de/marpe/download/Preprint-Performance-Comparison-AV1-HEVC-AVC-PCS2016.pdf
  94. "Netflix on AV1". Streaming Learning Center. 30 November 2017. Retrieved 8 December 2017.
  95. "MSU Codec Comparison 2017" (PDF). 17 January 2018. Retrieved 9 February 2018.
  96. Ozer, Jan (30 January 2018). "AV1 Beats VP9 and HEVC on Quality, if You've Got Time, says Moscow State". Streaming Media Magazine. Retrieved 9 February 2018.
  97. "AV1 beats x264 and libvpx-vp9 in practical use case". Facebook Code. Retrieved 17 April 2018.
  98. Shankland, Stephen (28 November 2017). "Firefox now lets you try streaming-video tech that could be better than Apple's". CNET. Retrieved 25 December 2017.
  99. "DASH playback of AV1 video in Firefox". Mozilla Hacks. https://hacks.mozilla.org/2017/11/dash-playback-of-av1-video/ [self-published source]
  100. "VLC 3.0 Vetinari". 10 February 2018. Retrieved 10 February 2018.
  101. Nick Stat (4 January 2018). "Apple joins group of tech companies working to improve online video compression". The Verge. Retrieved 10 January 2018.
  102. Ozer, Jan (28 March 2018). "AV1 Is Finally Here, but Intellectual Property Questions Remain". Streaming Media Magazine. Retrieved 26 September 2018.
  103. Ozer, Jan (16 April 2018). "NAB 2018: Mozilla Talks Daala, Firefox, and AV1". Streaming Media Magazine. Retrieved 26 September 2018.
  104. "AV1 Codec ISO Media File Format Binding". cdn.rawgit.com. Retrieved 14 September 2018.
  105. "AV1 Beta Launch Playlist - YouTube". YouTube. Retrieved 15 September 2018.
  106. "DASH playback of AV1 video in Firefox – Mozilla Hacks - the Web developer blog". Mozilla Hacks – the Web developer blog. Retrieved 20 March 2018.
  107. "AV1 Decoder - Chrome Platform Status". www.chromestatus.com. Retrieved 14 September 2018.
  108. "AV1 Decode - Chrome Platform Status". www.chromestatus.com. Retrieved 28 June 2018.
  109. "VLC release notes".
  110. "GStreamer 1.14 release notes". gstreamer.freedesktop.org. Retrieved 20 March 2018.
  111. "Download FFmpeg". www.ffmpeg.org. Retrieved 22 April 2018.
  112. "FFmpeg". www.ffmpeg.org. Retrieved 22 April 2018.
  113. "mpv v0.29.0". github.com/mpv-player/mpv. Retrieved 16 September 2018.
  114. "MKVToolNix v22.0.0 release notes".
  115. "MKVToolNix v22.0.0 released | mosu's Matroska stuff". www.bunkus.org. Retrieved 3 May 2018.
  116. "MediaInfo 18.03". Neowin. Retrieved 3 May 2018.
  117. "Encoding Release Notes". Bitmovin Knowledge Base. Retrieved 9 July 2018.
  118. "clsid2/mpc-hc". GitHub. Retrieved 14 September 2018.
  119. "AV1 Still Image File Format (AVIF)". aomediacodec.github.io. Retrieved 15 April 2018.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.