Errors in Audiovisual Translation

By Donald Ducy

Introduction

The purpose of this paper is to bring to light the concepts surrounding audiovisual translation, moving from definitions of the term to its practical application through a discussion of the sitcom Friends. The term audiovisual translation refers to transferring the linguistic components of an audiovisual work or product from one language to another. In this paper, definitions proposed by several authors are considered, and all lead to the same conclusion: audiovisual translation is the transfer of the language components contained in audiovisual works and products into another language. This study acknowledges that the concept of audiovisual translation, as it is used in the modern film business, is best understood through its roots: it has a long and illustrious history dating back to the silent-film era. Subtitling, dubbing, and voice-over are among the audiovisual translation techniques discussed in the paper. The study observes that when undertaking these techniques, translators are prone to mistakes because of inexperience. To illustrate this, the television series Friends is brought into play: the study uses audiovisual translation issues seen in Friends to bring to light the most common flaws, including poor line breaks, incorrect pacing, and the use of more than two lines, among other things.
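The flaws named above (line breaks, pacing, more than two lines) correspond to simple mechanical constraints that subtitlers routinely check. As a minimal illustration, the following Python sketch flags a cue that breaks the two-line rule or the reading-speed rule; it is a hypothetical helper, not an industry tool, and the numeric limits are common conventions assumed here for the example.

```python
# Hypothetical subtitle-cue checker illustrating the error types discussed
# above. The limits below are widely used conventions, assumed here.
MAX_LINES = 2              # a cue should not exceed two lines
MAX_CHARS_PER_LINE = 42    # a common line-length convention
MAX_CHARS_PER_SECOND = 17  # a common reading-speed ceiling

def subtitle_errors(lines, duration_seconds):
    """Return a list of rule violations for one subtitle cue."""
    errors = []
    if len(lines) > MAX_LINES:
        errors.append("more than two lines")
    for i, line in enumerate(lines, start=1):
        if len(line) > MAX_CHARS_PER_LINE:
            errors.append(f"line {i} too long ({len(line)} chars)")
    total_chars = sum(len(line) for line in lines)
    if duration_seconds > 0 and total_chars / duration_seconds > MAX_CHARS_PER_SECOND:
        errors.append("reading speed too fast")
    return errors

# A three-line cue shown for only one second breaks two rules:
cue = ["We were on a break!", "No, we were not!", "Yes, we were!"]
print(subtitle_errors(cue, 1.0))
# → ['more than two lines', 'reading speed too fast']
```

Real subtitling software applies many more rules (line-break position, gap between cues, shot-change avoidance), but the principle is the same: each cue is validated against timing and layout limits.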

Defining AVT

Audiovisual Translation (AVT) is the process of transferring the linguistic components of an audiovisual work or product from one language to another (Sullivan, 2018). Feature films, TV shows, stage plays, musicals, operas, websites, and video games are just a few examples of the many audiovisual products that require translation. As the name suggests, audiovisual material is designed for simultaneous listening (audio) and viewing (visual), though it is primarily intended for viewing. Although many authors define the term differently, an in-depth examination reveals that their definitions converge, so it is worth considering how other authors approach it. According to another source, audiovisual translation is the translation of polysemiotic materials that are shown on screen to large audiences. Multimedia products such as films, documentaries, and television programs must be translated into other languages to reach broader audiences and boost their popularity and consumption. Subtitling and dubbing are two techniques used to translate audiovisual materials; whichever method is chosen, the translation of the source material is accomplished by applying various translation techniques, such as literal translation, reduction, and modulation. Such procedures and techniques have been examined in depth elsewhere, for example in comparative analyses of the Spanish translation of the English film The Lord of the Rings: The Two Towers.

History of Audiovisual Translation (Subtitling, Dubbing & Voice-Over)

Since various audiovisual translation techniques have been developed over the history of the field, this paper addresses each technique individually while examining its history. Understanding their past is essential for understanding their current state and for predicting future developments in audiovisual translation. Consequently, this section discusses the past, present, and future of the field.

Intertitles were the first captioning method to add value to a film by carrying dialogue and providing additional information about difficult locations and scenes. The title card, also known as the title image, was a still image of text inserted into the footage to provide context for a silent film (González & Luis, 2014). Edwin S. Porter's 1903 adaptation of Uncle Tom's Cabin is credited as the first use of intertitles. Since the early 1920s, subtitles have come a long way in sophistication. Mechanical and chemical processes had to be tested and refined before they could be integrated and used, to ensure that they were as convenient and effective as possible. Laser subtitling, which uses a laser to melt away the emulsion of the final film copy, remains the most common method available today.

Technically, the first film was made in 1896, but the first feature film, The Story of the Kelly Gang, was released in 1906, and like every film of its era it had no sound. In the silent era, the "inter-title" or "title card" was adopted to replace speech or complex narration in a film's story. Title cards are small frames of text inserted between sequences. In many ways they were an essential part of the storytelling process, comparable to today's narration, and because they were easy to translate they were very effective for international audiences.

In 1927, sound film production became feasible, and that year The Jazz Singer was released on the big screen. As expected, audiences were enthusiastic about sound: characters became more realistic and storytelling more refined. There was only one problem with the plan. With the introduction of sound, intertitles could no longer carry the translation, so directors would have to dub their films to attract an international audience. Subtitles were developed because of the high cost of reshooting films with foreign dialogue or of dubbing foreign narration in rhythm with the original footage.

Subtitles were an excellent solution to bridge the gap between intertitles and recorded speech. Displayed at the bottom of the frame, they allow foreign audiences to read the dialogue, and they can be translated without disturbing the flow of the image.

Since their introduction in the 1920s, subtitles have evolved continuously, becoming increasingly sophisticated and appealing to ever broader audiences.

Subtitles and captions are used by many people worldwide and are becoming more and more popular. English subtitles are translated into various languages so that overseas viewers can follow along. They can be enabled on video-on-demand services, broadcast shows, video-sharing sites such as YouTube, DVDs, and other video formats.

The requirement for subtitles was adopted in the UK in the 1990s to make video programs accessible to the deaf and hard of hearing. Subtitles have since gone a step further: they are no longer an option but a requirement. Subtitle regulation specifies improvements to subtitle content to make it more accessible; over the years, features such as Line 21 speaker identification, sound effects, music cues, and contextual information have been introduced to broadcasts. According to recent research, subtitles are increasingly used for educational purposes, both as a tool for foreign-language learners and as a means of supporting early reading development in adolescents. Over time, subtitles have come to be widely recognized as beneficial in education. Forced captioning is widely used in documentary films and TV shows, often providing additional contextual information or translating foreign-language speech.

The video production industry is struggling to keep up with the growing amount of video material. Because automatic subtitling methods are often used, subtitle quality has declined as a result. Viewers today are routinely exposed to errors in live captions, which rely on stenography or speech-recognition algorithms to capture the action on screen. With offline subtitles the results are often significantly worse, because the on-screen text is entirely machine-generated and frequently aired without review.

Considering this past, the future of subtitling is generally believed to be promising. Offline subtitles for broadcast have been of exceptional quality for years. Ofcom oversees broadcast captioning to ensure quality, and as a result of these regulations, video-on-demand and catch-up services are expected to improve as well. The explosion in the popularity of online video-on-demand services has made a considerable difference to the state of video subtitling, and things are changing further under the new legislation that regulates video-on-demand (VOD) subtitles in the UK.

The term dubbing is considered to have originated in the United States at the time of the first successful attempts to synchronize audio and video, and may have referred to the process by which a Vitaphone sound disc was "doubled," or copied. Or perhaps it comes from the actor's voice "double" in post-synchronization, which was a very early need. The conversion to sound was so rapid that even films already completed were converted to sound. One such example is The Canary Murder Case, in which the star, Louise Brooks, refused to reshoot the silent scenes, so another actress "doubled" her voice, imitating her in the sound scenes instead.

Whatever its origins, dubbing, like every other component of production, has always been governed by the technology available at the time. Any discussion of the art of dubbing therefore needs to take advances in film technology into account. According to Bob Allen, a research partner at the Motion Picture Acoustics Society who has investigated early audio patents, many modern ideas can be traced back to the first decades of the 20th century: optical stereo, for example, was patented in the early twenties, and a design for a radio microphone had been unveiled as early as 1917, years before integrated optical prints were first offered. As Allen put it, "We had to wait for a lot of advances to be made possible by transistors, and ultimately digital technology."

Certain directors, notably D. W. Griffith, chose not to include sound during the silent era. Griffith believed that silent film was a dramatic medium for the whole world, able to move the emotions of audiences everywhere, and his films were distributed complete with full musical scores. He was therefore cautious about the coming of sound.

On the other hand, among the most striking drivers of sound film's development were the vaudeville acts and opera singers who performed toward the camera while a band played the theme song. The movies' inventor, Thomas Edison, ran the first tests of sound-and-film synchronization, debuting his Kinetophone in 1895. However, the technology of the time could not yet solve the amplification problem. The Gaumont Chronophone and the Cameraphone followed. After years of research, Lee De Forest finally unveiled Phonofilm, the world's first practical optical sound-on-film technology, in 1923.

Unfortunately for De Forest, the cable and telephone colossus Bell/AT&T was developing a competing system at the same time. The Vitaphone, manufactured by Western Electric, spun at 33 1/3 rpm and recorded 9 minutes of sound on a 16-inch shellac disc; it was the first commercially successful recording device. Slowing the disc down from the consumer standard of 78 rpm significantly increased the signal-to-noise ratio. The search for a technology feasible for cinemas began shortly after a staggering demonstration in 1920, when President Harding's recorded speeches were played to entire auditoriums in New York and San Francisco. The remaining issues were resolved in 1926, when Warner Brothers secured an exclusive sound-on-disc license for Vitaphone devices.

1926 was an unforgettable year. National radio broadcasting launched in the United States a month before Warner's first sound film, Don Juan, which was followed shortly afterward by Fox's Movietone News. Don Juan was limited to music and sound effects, but it delighted audiences that summer. A year later, after The Jazz Singer began earning $100,000 a week in theaters, Warner stock rose 600% in two years, and the legendary film secured Warner's place in the major leagues.

The Jazz Singer contained only four "talking" sequences, and each Vitaphone disc held just enough sound for one 35mm reel; a cue frame told the projectionist when to start the disc. Small faults were no problem, because the novelty prevailed: to coin a Fox phrase, the picture was distinguished by the fact that it talked. The first Movietone newsreel, released a month before The Jazz Singer, used sound-on-film technology that displaced the De Forest system, which faded into obscurity. Although it carried commentary rather than synchronized dialogue (which would not be perfected for another five years), the method's flexibility was demonstrated the following year, when Movietone footage of Lindbergh's historic transatlantic flight appeared in theaters across the country within 24 hours.

Sound spread like wildfire. However, combining optical soundtracks was technically impossible at the time: only one layer of sound could be heard at once, so whether a scene called for dialogue or for a song, the film's soundtrack required the services of an orchestra, on hand to provide the incidental music live. Dubbing consisted mainly of splicing raw 35mm optical sound negatives and painting "blue ink" over the splice joint to keep it from interfering with the optical track in the resulting composite print. Notably, the optical track sat between the image and the sprocket holes.

The Laurel & Hardy films of 1932 and 1933 seem to have been the first to include music under the dialogue track, achieved quite reliably with optical superposition at continuously varying levels. Because mixing was so difficult, optical sound was at first handled with visual methods: to make a dissolve, for example, the outgoing scene was faded out, the film was wound back in the camera, and the incoming scene was exposed over it as a double exposure (González & Luis, 2009). Initially, standard cinema film stock was used for recording, but since the physical properties of its grain produced a great deal of noise, a special emulsion was developed to reduce the noise introduced and shift it upward in the spectrum. An exposure technique known as "pre-flash" was used to soften the result further.

Synchronizing the sound was at first a matter of hit and miss, and it brought terrifying complexity to post-production. The original microphones were essentially the same carbon omnidirectional transducers used in the telephones of the period. (The 1928 film The Lights of New York recorded an actor's conversation with a microphone hidden in the candlestick telephone on his desk.) Carbon microphones crackled loudly in humid conditions, and actors had to stay within about two feet of them, as close to the recorder as possible. Singin' in the Rain dramatizes these problems: a hidden microphone picks up the star's pearl necklace louder than her voice, and she ultimately has to be overdubbed by Debbie Reynolds's character.

In the first few years, sound on film was complicated further by a fierce battle between two competing optical encoding systems. Several studios, including MGM, followed Fox (later 20th Century Fox) in adopting Western Electric's variable-density system, in which sound was recorded as horizontal lines (similar to a barcode) that grow closer together at higher frequencies, with density varying by volume. Warner Bros., RKO, Republic, and other studios used RCA's variable-area system, which produced analog waveforms varying in amplitude and frequency. Sound on film won out quickly, and Vitaphone began to decline. Under a unique patent-exchange agreement with Western Electric's parent company, AT&T, RCA was able to license AT&T's patents, and thanks to the technical advantages of the variable-area system, RCA managed to convince the majors to modify their existing equipment.

At the same time, the aim of dubbing technology was to put the greatest possible amount of audio content onto a film track while maintaining the best possible signal-to-noise ratio in the theater, where the sound sometimes came from a single loudspeaker at one end of the available space. The typical bandwidth of early optical sound was 100 Hz to 4 kHz, and the dynamic range remained below roughly 30 dB, though over the following decade the bandwidth theoretically extended to 30 Hz to 10 kHz. In dubbing, the mixer followed a standard called the "Academy Curve" or "Academy Rolloff," developed under MGM's head of sound, Douglas Shearer (Norma's brother). The standard had a 125 Hz floor, a steep slope above 4 kHz, a 15 dB rolloff at 8 kHz, and an effective ceiling of 9 kHz. It remained the de facto monaural standard until the introduction of Dolby A in the 1970s.

Dubbing as it is generally understood seems to have started around 1930. The movie Applause, directed by Rouben Mamoulian, is widely regarded as a milestone in the history of sound mixing. By experimenting in finishing with editing all of the sound onto two interlocked 35mm tracks, Mamoulian laid the foundation for traditional film track layout and dubbing techniques. He then tried various photographic processes, such as recording "sound" directly onto the optical negative to produce "unrealistic" noises in Dr. Jekyll and Mr. Hyde, and obtained the effects he needed.

The first sound mixing consoles were produced in the early 1930s. They contained four channels, each with an on/off switch and a single rotary fader or potentiometer: eight controls in total! Dubbing consoles were often two production mixing desks placed side by side. The synchronization process was initially limited to four channels; as part of the standard "preparation for dubbing," the dialogue was placed on track one, leaving three tracks to be shared between music and sound effects as needed. All of Busby Berkeley's and Astaire/Rogers's impressive film numbers were produced under conditions unimaginable today; Fred Astaire, for example, pre-recorded his tap steps exactly as they would later run. (In the UK, the sound-effects track is always called "FX"; in the US, it came to be called "Foley," after Universal's Jack Foley, the famous footsteps expert.) One of the dubbing mixer's main tasks was to match the sound's perspective to the point of view of the shot. At the time there was little to be done beyond adjusting levels, but technological advances, above all the advent of dynamic microphones, made it possible to improve the situation within a few years.

Had EQ been possible, John Gilbert's career as a screen Romeo might not have ended in so nerve-racking a way. Optical sound could be of very high quality: conductor Leopold Stokowski, who recorded the music for Fantasia, insisted on recording the Philadelphia Orchestra on 35mm film even after 1/4-inch tape was developed. By the time tape became widely available in the mid-1940s, 8-channel dubbing consoles were standard and 10 or 12 channels were "special." Despite much research, it is not possible to determine when EQ (equalization) was first implemented; judging from the films of the period, at least high-pass/low-pass filtering must have been a feature of the larger mixing desks.

As narration became more common on radio, in cartoons, and in other media, narrated performances grew more popular with the general public. Notable exceptions to voice actors' anonymity were, of course, those who worked for Walt Disney, and Mel Blanc, who was also a comedian and radio personality. For his versatility Blanc came to be known as "The Man of 1000 Voices," and he provided voices for numerous cartoons produced by Warner Bros. One of the most influential and prolific voice artists in history may not be well known to the public, but he is revered in the narration community: Don LaFontaine, who began narration work in 1962 voicing movie trailers. He rose to dominate movie-trailer narration and set the standard for how trailers would be written and spoken for the next generation.

Even as voice work grew into a competitive profession, it remained mostly in the background, literally and figuratively. Performers filled their spare time with narration work; it was their "in-between" activity. Over the last few years, however, the advent of digitally animated films has brought voice-over to the forefront, earning it a level of respect it previously lacked. Celebrities now lend their voices to stunning hits: The Lion King, starring Matthew Broderick with Jeremy Irons and James Earl Jones; Eddie Murphy in Shrek; Liam Neeson in the Narnia series; and more. Audiences have become accustomed to hearing famous actors in animated films, which has proven a successful marketing technique for the companies that make them.

Voice-over has experienced a resurgence thanks to the hard work of these stars, and millions of curious youngsters now aspire to enter the industry. Actors of all sizes, shapes, personalities, and abilities can find paying work in this corner of the entertainment business. And it is exciting!

Types of Audiovisual Translation

Dubbing

Dubbing, the addition of new dialogue and other sounds to the soundtrack of an existing film, is known in the filmmaking industry as a post-production process. The term is most commonly associated with translating a foreign-language film into the target audience's language: the translated lines are matched as closely as possible to the movement of the actors' lips in the picture. Dubbed soundtracks rarely equal the original foreign-language soundtracks, however, so subtitles may be preferred as a way for viewers to follow the dialogue of foreign films. Dubbing is also frequently used for technical reasons on original-language soundtracks (Gottlieb & Henrik, 1994). Filmmakers often use it to correct flaws that arise during filming, where actors' voices are recorded at the same time as the picture: synchronously recorded dialogue may be muffled or inaudible because of the distance between the actor and the microphone or unexpected air traffic overhead, and it may be impossible to hide a microphone close enough to pick up the actor's voice intelligibly. Dubbing allows filmmakers to obtain high-quality dialogue regardless of the conditions that existed during shooting. It is also used to add sound effects to the original soundtrack, and in musicals it can substitute a more pleasing voice when an actor performs a song on camera.

Filmmakers in some countries use dubbing to provide the soundtracks of entire films, a popular method because it is cheaper and less time-consuming than synchronized cinematography. It must be remembered, however, that even the most experienced filmmakers cannot master dubbing alone: it is a time-consuming process that requires the participation of many specialists. Filmmakers seeking a dubbing service should be aware that the first step to success is working with the right professionals to achieve the desired result. The following recommendations give a high-level overview of what the dubbing process looks like.

The purpose of dubbing is to recreate the film's dialogue in the native language of the target audience. The first step, of course, is to translate the script into the target language.

This element of the dubbing process is more complicated than it looks. Why? Timing is essential in dubbing. The goal when translating dialogue is to match its timing to, or synchronize it with, the original language, but this is not always possible: what takes three words to say in one language can take six words in another. For the dub to succeed, however, the translated dialogue must take about the same amount of time as the original-language version. It is therefore essential to have a translation expert who can choose wording faithful to the original while managing the timing requirements of dubbing.
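The timing constraint above can be made concrete with a small Python sketch. This is a hypothetical illustration, not a dubbing-industry tool: the assumed speaking rate is a rough average, and real dubbing work is done against precise timecodes.

```python
# Hypothetical timing check for dubbed dialogue. The speaking rate below is
# an assumed rough average; real dubbing relies on exact timecodes.
WORDS_PER_SECOND = 2.5  # assumed average speaking rate

def estimated_duration(line: str) -> float:
    """Rough speaking time for a line, in seconds."""
    return len(line.split()) / WORDS_PER_SECOND

def timing_mismatch(original: str, translation: str, tolerance: float = 0.25) -> bool:
    """True if the translation's estimated duration deviates from the
    original's by more than the given fraction (default 25%)."""
    orig = estimated_duration(original)
    trans = estimated_duration(translation)
    if orig == 0:
        return trans > 0
    return abs(trans - orig) / orig > tolerance

# Three words rendered as six words roughly doubles the speaking time,
# so this line would be flagged for rewording:
print(timing_mismatch("I love you", "Yo te quiero con el corazon"))
# → True
```

A check like this only estimates duration from word counts; a translation expert still has to choose wording that fits the lip movements as well as the clock.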

Another critical aspect of a successful dubbing process is identifying and hiring the right talent. Many creatives and performers specialize in dubbing for specific markets, which helps filmmakers find people who understand the exact requirements of the process. In particular, dubbing talent must be able to adhere to strict time constraints: when speaking translated dialogue, performers often watch the original performance in the recording studio to make sure their lines fit it. Another factor to consider is finding someone who resembles the original performer's voice in tone and intonation. As noted, the aim of dubbing is to allow people to follow the film in their local language, and in some cases it is very advantageous to cast a performer whose voice is close to the on-screen character's, so that the experience can be enjoyed without any awareness of the dubbing effort.

Some level of expertise is needed at every stage of the dubbing process, but it is especially important that all the film professionals involved, particularly in recording the dubbed language, make every effort to ensure a successful session. It is therefore necessary to find the right recording venue: while some dubbing or narration artists may have high-quality recording studios at home, the person in charge of the dubbing process should do their homework and reserve a space that can handle all of the production's dubbing needs. Often a professional studio is the best choice. The dubbing process involves many filmmaking professionals, including translation specialists, dubbing talent, and sound engineers, and booking a second recording session costs additional time, money, and energy. The director should therefore engage the most experienced people the budget allows, so that the dubbing is effective the first time.

Voice-over

Voice-over (also known as off-camera commentary) is a production technique in which a voice that is not part of the story (i.e., non-diegetic) is used in radio and television productions, filmmaking, theatre, or other presentations. The script is read aloud as narration, and the voice may belong to a professional voice actor or to a character who appears elsewhere in the production. Voice-over is commonly combined with synchronous dialogue, where the narration describes action taking place at the same time as the recording; this is the most frequent approach. It is also employed asynchronously, as in documentaries and news reports, where pre-recorded audio is played over film and video footage. Voice-over appears in video games and on-hold messages, and in announcements and information at live events such as awards ceremonies. For clarity: voice-over is added on top of an existing recording and should not be confused with the voice acting used to replace translated dialogue, a process known as dubbing.

Ishmael (Richard Basehart) is the narrator of Herman Melville's Moby Dick (1956) and, like Joe Gillis (William Holden) in Sunset Boulevard (1950) and Eric Erickson (William Holden) in The Counterfeit Traitor (1962), occasionally comments on the action in his narration. In Great Expectations (1946), the adult Pip (John Mills) narrates, as Michael York did in the 1974 remake. Narrative methods are also used to give words or personality to animated characters, or to create interactions between them. Mel Blanc, Daws Butler, Don Messick, Paul Frees, and June Foray are among the most famous and talented voice actors in this tradition.

Characterization is a technique adopted in voice-over to give a fictional character a personality and a distinctive voice. The use of characterization in voice-over has attracted some discussion, especially where white radio artists imitated black speech patterns. Radio made it easy to get away with racial caricature because it was a non-confrontational medium in which presenters felt free to express whatever they considered appropriate at the time (Franco et al., 2010). As a result, it became a preferred medium for voice imitation. Characterization has always been prominent in popular culture and in all forms of media, including film and television. In the late 1920s, radio began to move away from covering only music and sporting events; instead of continuous talk, stations began producing shows with serialized storylines, which gained popularity. Character voicing can be a powerful form of creative expression in film and television, but it should be handled with great care.

When a filmmaker pairs the sound of a human voice with images on screen that may or may not relate to the words being spoken, the result is a form of montage. Narration can therefore be used to provide an ironic counterpoint, and it can come from voices not directly tied to the person shown on screen. In works of fiction, the narration is often provided by a character reminiscing about their past, or by a narrator outside the plot who generally has a more comprehensive understanding of the events depicted in the film than the other characters do.

Narration is often used to create the illusion that the story is told jointly by the characters and an omniscient narrator. In the film The Usual Suspects, for example, the character Roger "Verbal" Kint delivers voice-over narration as he retells a criminal case. Citizen Kane and The Naked City are two films with classic voice-over narration in film history. Voice-over is also used to maintain continuity in edited versions of films so that audiences can better understand what happened in the time elapsed between scenes. Joan of Arc (1948), starring Ingrid Bergman, proved far less of a critical and box-office success than expected, and for its second release it was cut from 145 minutes to 100 minutes; the widely distributed edited version used narration over the years to conceal the fact that a significant part of the picture had been deleted. In the full-length version, restored in 1998 and released on DVD in 2004, the narration is heard only at the opening of each scene. The voice-over technique is particularly associated with the film noir genre, and first-person narration reached its peak of popularity in the 1940s. In film noir the narration was usually male, but there are many examples of female narration. In radio, voice-over is an essential element of producing a broadcast. Voice-over artists may be hired to give the broadcaster an identity, enriching or deepening the content of the show. The British broadcasters Steve Wright and Kenny Everett, both working in the 1980s, used voice-over artists to create a virtual "posse" of studio staff who contributed to their programs, although the idea was in use long before that. The American radio presenter Howard Stern has used voice-over in a similar way.

Non-fiction voice-over is used in a variety of situations. Television news is typically presented as a series of video clips of newsworthy events, with voice-over by reporters explaining the significance of the scenes, interspersed with straight-to-camera video of newscasters telling stories for which no footage is shown. Many television networks, including the History Channel and the Discovery Channel, make extensive use of narration, as did NBC's reality show Starting Over, whose story was told effectively through the narration of Sylvia Villagrán. During live sports broadcasts, commentators provide voice-over through the video of the sporting events displayed on screen.

Game shows formerly made extensive use of voice-over to introduce contestants and describe available or awarded prizes, but this technique became less common as shows shifted their focus away from material rewards. Don Pardo, Johnny Olson, John Harlan, Jay Stewart, Gene Wood, and Johnny Gilbert were among the most prolific announcers of their generation. The DVD release of a feature film or documentary frequently includes a voice-over commentary track by prominent critics, historians, or members of the production team itself.

Subtitling

In today's world, most video content is captioned or subtitled. Closed captions were initially intended to assist deaf and hard-of-hearing viewers, but that is no longer the technology's only purpose. Subtitling is one of the two most prevalent types of audiovisual translation, the other being dubbing, and it is generally considered a component of the multimedia localization process. In recent years, as audiovisual goods are continually developed in many regions of the world, this sector of the translation industry has experienced significant growth. Moreover, consumers now have their own devices for creating audiovisual content, which has further increased the demand for adaptable content.

Subtitles allow us to recreate and convey people's voices and the communicative settings in which they speak. They have a significant impact on society, since the audiovisual sector is transforming how people communicate, educate themselves, and share knowledge. Thanks to subtitles, we can now access many forms of entertainment in this new environment, including movies, music videos, games, and television programs such as documentaries. This is altering our consumption behavior: the amount of time we spend in front of screens is greater than it has ever been, and as a result there is an increasing demand for subtitled audiovisual content. We see captions on the majority of the video content we watch on social media and entertainment platforms, largely because, according to many surveys, over 85 percent of people who view videos on Facebook do so with the sound turned off. Captions are springing up everywhere these days.

According to Zenith's Online Video Forecast 2019, the average individual will spend 100 minutes per day watching online video in 2021, up from 84 minutes per day in 2019.

In the five years between 2013 and 2018, the amount of time people spent watching online video increased at an average annual pace of 32 percent. Larger, higher-quality displays on mobile devices, faster mobile internet connections, and the widespread adoption of connected television sets all contribute to this growth. Zenith predicts that advertising expenditure on online video will increase from US$45 billion to US$61 billion by 2021, growing by 18 percent a year on average, compared with average growth of 10 percent a year for internet advertising overall. As a result, subtitling is more in demand than ever before, and that demand will continue to grow. To obtain the best possible results, it is essential to work with experienced translators who are also experts in the field of subtitling.

Errors in Audiovisual Translation (the TV Series Friends)

Anyone who has watched the television sitcom Friends will notice several translation problems, particularly in the subtitles. Cramming too many characters into a single line, retaining unnecessary foreign elements in the subtitles, translating interjections, and conveying an inappropriate speech style are all examples of such errors. The errors found in the iconic comedy Friends are detailed in the following paragraphs.

Literal Translation of the Majority of the Content

Wherever possible, literal translation is used in the development of subtitles. However, it is not always the best option, as the television series Friends demonstrates. It is not a good idea, for example, to translate idioms literally. Moreover, given that some languages need more words than others to express the same idea, the translator must find the best way to convey what is being said while keeping the subtitle as short as possible; literal translation does not allow this.

An Excessive Number of Characters in a Line

This error goes hand in hand with literal translation, as it did in most cases in the sitcom Friends. Subtitling must contend with limits on reading speed and screen space, so the translators should have taken care not to overload the subtitles, which made the text difficult for viewers to read. By refraining from literal translation, they could have applied a fundamental principle of subtitling, reduction, using fewer characters while still supporting the audience's understanding and enjoyment.
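As an illustration of how such constraints can be checked mechanically, here is a minimal Python sketch. It is not part of any real subtitling workflow discussed above, and the 42-character and two-line limits are widely cited conventions rather than universal standards:

```python
# Illustrative sketch: flag subtitles that exceed common length limits.
# The 42-characters-per-line and two-line limits are common conventions,
# not universal standards; adjust them per client style guide.

MAX_CHARS_PER_LINE = 42
MAX_LINES = 2

def check_subtitle(text: str) -> list[str]:
    """Return a list of problems found in a single subtitle block."""
    problems = []
    lines = text.split("\n")
    if len(lines) > MAX_LINES:
        problems.append(f"too many lines: {len(lines)}")
    for i, line in enumerate(lines, start=1):
        if len(line) > MAX_CHARS_PER_LINE:
            problems.append(f"line {i} too long: {len(line)} chars")
    return problems

print(check_subtitle("How you doin'?"))  # → [] (within limits)
print(check_subtitle("A\nB\nC"))         # → ['too many lines: 3']
```

A check like this catches only mechanical overruns; deciding *what* to cut is the reduction work that still requires a human translator.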

Retaining Nonsensical Foreign Elements

Translating subtitles requires moving from one language and culture to another and dealing with the disparities between them. Keeping elements that communicate nothing (unless they are essential to the plot) will almost certainly confuse viewers. Some cultural characteristics are bound to be lost because of the restrictions listed above and the differences between languages. Instead of retaining them, the translators of the sitcom Friends should have experimented and developed simpler but equally meaningful counterparts in the target language, drawing on their creativity and knowledge of the subject matter.

Including Interjections

Because the number of characters is restricted, interjections take up valuable space. The translators should have avoided translating or inserting utterances that were not essential for understanding. Furthermore, the visual and audio channels of Friends are quite helpful here: because most interjections can be recovered from the audiovisual output itself, there is no need to translate them, and nothing that would impair comprehension is lost.

Communicating with an Inappropriate Speech Style

Subtitle translation gave the translators the opportunity to lend voice and personality to speakers of another language; failing to adopt the appropriate style therefore distorts the speaker's voice. The result was a lack of coherence between what viewers saw and heard while watching the series and what they read on screen. A subtitle should not read like a 15-year-old girl while an 80-year-old woman is speaking, unless that contrast is genuinely part of the final audiovisual product, which is not the case here.

Creating the Subtitles without the Use of Any Audio or Video Footage

The previous type of error can, for example, result from this one. Subtitles created by translators who lack access to the original audiovisual material may contain various errors of reference or accuracy, and when the subtitles are played back there may be discrepancies between the captions and the visual information.

Wrong Timing

Always play the video with the subtitles on to double-check timing and cueing. Subtitles should be introduced after the speaker has begun speaking, not before, yet there are instances in the sitcom where this rule is broken. Subtitles that appear too early or too late are confusing, and they may lead viewers to believe that something is missing or wrong with the program.
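To make the cueing rule concrete, the following Python sketch flags subtitles whose in-cue precedes the start of the speech they translate. The cue data (times in seconds) are invented for this example and do not come from the series:

```python
# Illustrative sketch: find subtitles that appear before their speech
# begins. Times are in seconds; the data below are invented examples.

def find_early_cues(cues):
    """cues: list of (subtitle_start, speech_start) pairs.
    Return the indices of subtitles that appear before the speaker
    has begun speaking."""
    return [i for i, (sub, speech) in enumerate(cues) if sub < speech]

cues = [(1.0, 1.0), (4.2, 4.5), (9.0, 8.8)]
print(find_early_cues(cues))  # → [1]: the second subtitle leads its speech
```

In practice the check would be run against real timecodes from a subtitle file (e.g., SRT cues) and the speech track, but the principle is the same: every flagged index is a cue worth rechecking against the video.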

More Than Two Lines Used

More than two lines should be used only in highly exceptional circumstances, and typically only when the extra text does not obscure the image displayed on the screen. As a general rule, we use two lines, centered at or near the bottom of the screen.

Problems with Line Breaks

Last but not least, subtitles should be semantically self-contained in order to function correctly. Line and subtitle breaks should be guided by the semantics and syntax of the text and by the cadence of the speakers' delivery. Punctuation is also an important consideration and should be consistent throughout. Whenever the shot changes, the subtitle should change with it.

Conclusion

Finally, there are no rules that apply in all instances; common sense, sound judgment, and a comprehensive examination of the content are therefore necessary when creating a translation. It is also recommended that translators double-check their work; otherwise, they will very likely end up with poor-quality subtitles that contain too many lines, extra spaces, overly long subtitles, misspellings, and so on. There is always something that can be done better.