"In the electrifying realm of technology, networking acts as the intricate web stitching our digital world together, where information flows like a vibrant stream connecting devices and minds."
Explore the curated list of free cyber security certifications below:
✅ 1. Introduction to Cybersecurity
✅ 2. Cybersecurity Essentials
✅ 3. Networking Essentials
✅ 4. Intro to Information Security by Udacity
✅ 5. Network Security by Udacity
✅ 6. Fortinet FCF, FCA
✅ 7. Information Security by OpenLearn
✅ 8. Network Security by OpenLearn
✅ 9. Risk Management by OpenLearn
✅ 10. Certified in Cybersecurity℠ (CC)
✅ 11. CCNA Security Courses
✅ 12. Network Defense Essentials (NDE)
✅ 13. Ethical Hacking Essentials (EHE)
✅ 14. Digital Forensics Essentials (DFE)
✅ 15. Dark Web, Anonymity, and Cryptocurrency
✅ 16. Digital Forensics by OpenLearn
✅ 17. AWS Cloud Certifications (Cybersecurity)
✅ 18. Microsoft Learn for Azure
✅ 19. Google Cloud Training
✅ 20. Android Bug Bounty Hunting: Hunt Like a Rat
✅ 21. Vulnerability Management
✅ 22. Software Security
✅ 23. Developing Secure Software
✅ 24. PortSwigger Web Hacking
✅ 25. Red Teaming
✅ 26. Splunk
✅ 27. Secure Software Development
✅ 28. Maryland Software Security
✅ 29. Stanford Cyber Resiliency
✅ 30. Cyber Threat Intelligence
✅ 31. ITProTV - Free IT exam preparation
✅ 32. 15 Free CISA Courses

Additional free training and resources:
✅ Penetration testing / ethical hacking
✅ arcx
✅ Training
✅ ITMasters
✅ Penetration Testing
✅ Training Courses
✅ Introduction to Internet of Things
✅ Cyber Training Courses
✅ FREE Cybersecurity Education Courses
✅ Junior Incident Response Analyst
✅ Cyber Threat Intelligence 101
✅ TCM Security - Intro to Information Security
✅ GT - Network Security
✅ Ethical Hacking Essentials (EHE)
✅ Responsible Red Teaming
✅ Developing Secure Software (LFD121)
✅ EC-Council MOON Certifications
✅ CCNA Summary Sheet (PDF)
✅ HR Fundamentals and Best Practices
The Google Lisbon event covered a wide range of topics that shape technological development. Discussions included the role of ethics, security, resources, and budget in development through spatial computing: evolving VR content and 360° video ads; iris and fingerprint sensor technology and voice recognition SDKs; dynamic displays, computer vision, creative studio tools, and automated vehicles; and artificial intelligence (AI), machine learning (ML), business intelligence (BI), data science and big data, and deep learning hardware (NVIDIA GPUs) and software.
The talks also delved into AI-generated art, using deep neural networks and generative adversarial networks (GANs) to replicate, forecast, and blend styles of artwork (e.g. Botto, AI-DA, Obvious).
Other topics included IoT for consumer and commercial versus industrial applications, as well as Google Ads and commerce strategies for ranking websites, search optimization, and bringing in traffic.
Beyond breakthrough advances in science and technology, such as tools built on big data analysis combined with machine learning methods, automated cameras, and sensors, the discussions also focused on the future of digital security: biometric identification, the potential risks of facial recognition technology, and the need for stronger data protection laws.
These measures not only enhance cybersecurity but also safeguard the user's digital identity.
At some point, everyone needs to be aware that all technology carries security risks, and it is difficult to decide which approach is more feasible and secure, since more security usually means less privacy.
The talks also touched on cybernetics and network infrastructure, which serve as the backbone for all wired and wireless communication, from computer code to genetic code, including bio-engineering, nanotechnology, and IoT architectures; machine intelligence, synthetic biology, and transhumanism concepts, flipping bits into molecules and shifting atoms, as in DNA data storage breakthroughs; and the path from the tactile mirror to the virtual body, mapping territories from packages to people through media forensics and privacy and security surveillance reform built on trust, integrity, reliability and resiliency, authenticity, and compliance.
Finally, the event addressed integrating business, IT, virtual infrastructure software, cloud strategies, and data structures and algorithms. The talks emphasized implementing strong encryption and authentication protocols to protect data and ensure that it is accessible only to authorized personnel.
# Update: Insights from August 2023
Services transformed how companies do business, especially after 2017. Soon after Tim Berners-Lee invented Web 1.0, he suggested people could develop the Semantic Web. Web 2.0 progressed rapidly from the 1990s onward, driven largely by three core layers of innovation: mobile, social, and cloud. It became popular in the mid-2000s with the rise of CSS, JavaScript, and HTML applications, along with meteoric social media growth and data centers located around the world. Today, users rely on phones, laptops, PCs, tablets, headsets, and wearables to access information daily on both Web 2.0 and Web 3.0. Web 1.0's first-generation capabilities merely provided information to users: browsers could only read data on a static web page without any interactivity or searchability. Web 2.0 changed this by providing significant interactivity, where users could contribute to online knowledge without any technical understanding. Web 3.0 has four foundational pillars: semantic markup, blockchain and cryptocurrency, 3D visualisation, and artificial intelligence. It uses the same devices as Web 2.0 and continues to make information resources available to the world with real-time data monitoring, tracking, and immersion, while enabling new interactions such as searching, filtering results, and making data entries.
The process of social connectedness, triggered by social media and by mobile browser and app development, moved from "attracting" users to "extracting" from them; consequently, individuals and businesses began to suffer through higher fees or platform safety risks across cloud types. Though as-a-service offerings are growing by the day, there are usually three models of cloud service to compare: Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS). There are also other public or private services, such as Desktop as a Service (DaaS), Functions as a Service (FaaS), and even XaaS (anything as a service), all of which can be integrated with on-premises IT infrastructure to create a cohesive ecosystem known as the hybrid cloud.
The event also hosted a debate on the major technological forces currently driving digital disruption: cloud, social movements on mobile, and big data, while IoT transforms physical security in the digital age. It is not just Netflix or Facebook whose feeds drive views by privileging the stimulus-response loop of recommendation systems optimized for engagement as the key interface for content consumption. This same design has been criticized for leading users towards more extreme content, and other online platforms such as YouTube also shape users' world views, prompting new rules for artificial intelligence programs that can spread malicious content. What are the alternatives to the dangers of engagement-focused algorithms that hack the brain with deep psychological stimulation of preferences (neuromodulation "tuned" to activate) through "emotional surveillance technology"? In a world where citizens are not products, clients, or customers but people who reshape public human rights, the economy is a tool we humans invented, like democracy and politics, to help govern our relationships with each other, with nature, and with the world we live in. If these tools are not delivering outcomes that make us happy, safe, healthy, and better educated, and that protect and prepare our countries for an increasingly uncertain future; if quality of life is stagnating, jobs, health, and education systems are unfair regardless of how much money you have or where you live, and our environment is suffering, then it is time for our economic tools and practices to change and embrace transformative policies that allow data analysis to improve future business decisions and reprioritise our investments. This is where the rights approach to ethical decision making comes in, although fairness can be subject to different definitions across divergent languages, cultures, and political systems: it "stipulates that the best ethical action is that which protects the ethical rights of those who are affected by the action. It emphasizes the belief that all humans have a right to dignity" (Bonde and Firenze, A Framework for Ethical Decision Making). Technologies appear to pursue the same goal: to make the city more efficient, connected, and socially harmonious, turning a "healthy body" into an "efficient machine", signalling technologies that transformed the city into a programmed and programmable entity, a machinery whose behavior can be predicted, controlled, and modulated according to principles established by some well-intentioned technocrat. Meanwhile, the use of AI across industries, from retail to supply chain and from banking and finance to manufacturing and operations, with predictive AI threat detection (identifying and blocking malware through deep-learning-based detection) and deep learning acceleration built directly into Intel® hardware (architecture, accelerators, memory, storage, software, security), has changed the way industries operate. In social media environments, digital marketers have created new ways to connect and engage with target audiences and to measure marketing performance. In opposition, this has raised ethical concerns and carries the risk of attracting consumers' distrust: harmful marketing appeals, lack of transparency, information leakage, and identity theft.
Developing AI solutions should consider human rights, fairness, inclusion, employment, and equality, which can lead to gains in credibility for products and brands, ensure brand safety, and protect consumers from fraud and the dissemination of fake information, thus increasing customer trust in brands. Recognizing the value of sensitive data and the harm that could be caused if certain data were to fall into the wrong hands, many governments and industries have established laws and compliance standards under which sensitive data must be pseudonymized or anonymized. Europe is putting pressure on internet companies like Facebook and Google to safeguard against hate speech, trolling, cyber-bullying, fake news, online sex trafficking, and terrorist activities. The GDPR (General Data Protection Regulation), passed by the European Parliament and enforceable since 25 May 2018, aims to safeguard the data privacy rights of EU citizens. The regulation, combined with the EU court's "right to be forgotten" judgment, has set a precedent for the way companies handle the data of their consumers. Individuals now have the "right to data portability" and the "right of access", along with the "right to be forgotten", and can withdraw their consent whenever they want, including from an intrusive online brand presence.
Considering these principles, a chain of potentially relevant questions about the General Data Protection Regulation follows: How are organisations ensuring that the content posted by staff and consumers does not compromise the ethical principles of the brand, managing their social media presence in line with data protection, visual misinformation, and privacy regulations? What do you need to protect if an adversary gains access to information that is sensitive to you? What are the risks of compromise and how can they be mitigated? What practices and mechanisms can enable firms to cultivate an ethical culture of AI use, and how can digital marketing professionals ensure that they use AI to deliver value to target customers with an ethical mindset?
The importance of taking an interest in privacy law, digital legislation, and the regulation of AI ethics lies in recognizing that digital literacy education serves to build responsibility in civics and citizenship towards the environmental impacts of human-machine relationships, but also to protect and critically question one's own values, bridging the discursive gap between policy text and practice into re-formed conceptions of learning, creativity, and identity in the new machine age. Examples of seeking capital (informational, social, and cultural) appear whenever companies use artificial intelligence or machine-learning systems for recruitment; in the process of seeking these types of capital through digital marketing platforms, consumers experience both positive effects (benefits) and negative effects (costs). Filtering perception and awareness ("munitions of the mind"), from stone monuments, coins, broadsheets, paintings and pamphlets, posters, radio, film, and television to computers and satellite communications, has been present throughout history, as propaganda has gained access to ever more complex and versatile media. The velocity of information flow, the volume of information shared, network clusters, and cross-posts on different social media can be analyzed and compared for negative and positive electronic word-of-mouth. Intra-interaction consequences, such as consumers' cognitive, emotional, and behavioral engagement with the brand, trigger extra-interaction consequences of brand trust and attitude, thus developing brand equity through the digital content marketing (DCM) strategy.
At the same time, firms articulate and build the digitally enabled capabilities required to transform their linear supply chains into connected, intelligent, scalable, customizable, and nimble digital supply networks through synchronized planning, intelligent supply, smart operations, and dynamic fulfillment. Digital supply networks rest on hyper-connectivity, social networking, and cognitive computing, where tools such as Matlab, Excel, and Python can transform raw sound data into numeric data for machine learning and improve training accuracy for deep learning models: "This means that if your data contains categorical data, you must encode it to numbers before you can fit and evaluate a model. The two most popular techniques are an integer encoding and a one hot encoding, although a newer technique called learned embedding may provide a useful middle ground between these two methods." Cloud computing combined with software-as-a-service (SaaS) delivery models and 3D printing follow the same path. The use of customer analytics to make smarter business decisions generates more loyal customers and helps ensure that customers have positive experiences with the company at all levels, from initial brand awareness to loyalty, which is crucial to the success of any business. This often leads to confusion (a "discursive gap") about when and how to deploy which information technology to maximize value creation during the stages of the customer journey, usually raising questions such as: "What is the interplay between customer traits (e.g. innovativeness, brand involvement, technology readiness) and the attributes of technological platforms in this process? What firm capabilities are required to capture, manage, and exploit these innovation opportunities from customers to gain a deeper understanding of them?"
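As a concrete illustration of the encoding advice quoted above, here is a minimal sketch in plain Python and NumPy (my own example, not from the event) of integer encoding and one-hot encoding for a small categorical feature; the column name and category values are invented for the illustration.

```python
import numpy as np

# Hypothetical categorical feature, e.g. a "device" column in a marketing dataset.
devices = ["phone", "laptop", "tablet", "phone", "wearable", "laptop"]

# Integer encoding: map each distinct category to an integer label.
categories = sorted(set(devices))                  # ['laptop', 'phone', 'tablet', 'wearable']
to_index = {cat: i for i, cat in enumerate(categories)}
integer_encoded = np.array([to_index[d] for d in devices])
print(integer_encoded)                             # [1 0 2 1 3 0]

# One-hot encoding: one binary column per category, with a 1 marking each row's category.
one_hot = np.zeros((len(devices), len(categories)), dtype=int)
one_hot[np.arange(len(devices)), integer_encoded] = 1
print(one_hot)
```

A learned embedding would instead feed the integer labels into a trainable embedding layer (in Keras or PyTorch, for example), letting the model place related categories close together in a small dense vector space.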
Since there are different types of data (nominal, ordinal, discrete, continuous, interval, and ratio scale), Netflix's dynamic optimizer, for example, attempts to improve the quality of compressed video by gathering data, initially from human viewers, and using it to train an algorithm to make future choices about video transmission, with the aim of delivering personalized and targeted experiences to multiscreen audiences that keep them coming back. Netflix's library is vast, but much of the content is geographically restricted due to copyright agreements: the movies and TV shows available are limited by country. When traveling abroad you may need a VPN to securely access your usual home streaming services. Because not all Netflix shows are available worldwide, many subscribers turn to VPNs that disguise their location and fool the streaming service into offering a content catalog for a different region, but Netflix blocks most of them, and not all VPNs work with Netflix. The rapid growth of digital devices and their access to the Internet has created security threats to user data; as attackers adopt ever more advanced measures, security and privacy threats become more sophisticated by the day, increasing the demand for up-to-date technical skills and highly secure media to protect entities and their valuable information on the Internet. "Netflix's machine learning algorithms are driven by business needs."
AI is progressing in broadcast and media through mainstream applications that uncover patterns not always intuitive to human perception and that can change consumer behaviour. The two most viewer-centric applications are content discovery and content personalization. Netflix's new AI tweaks each scene individually to make video look good even on slow internet connections; it also tracks the movies we watch, our searches, ratings, when we watch, where we watch, what devices we use, and more. In addition to machine data, Netflix's algorithms churn through massive amounts of movie data derived from large groups of trained movie taggers. Google is using artificial intelligence to make YouTube content safer for brands, applying deep learning to build artificial neural networks that mimic the way organic (living) brains sort and process information.
There are dozens of reporting features and metrics for reviewing web services. With Search Console you can monitor, maintain, and troubleshoot your presence in Google's Search results and be aware of any warnings or issues that come directly from Google. On a technical level, Google Analytics works through JavaScript tags that run in your website's source code, usually operated with Google Tag Manager; these JavaScript tags set cookies in visitors' browsers that harvest personal and sometimes sensitive data in return. The question arises: "Is Google Analytics GDPR-compliant to use? How do you balance Google Analytics, cookies, and end-user consent on your website?" Google Tag Manager is a hugely popular tool for websites of any size and shape. It organizes all third-party tags on your website (like Google Analytics or Facebook pixels) and controls when they are triggered. It is important for website owners to know that almost all such "third-party tags" set cookies that, according to EU law (the GDPR), fall into categories requiring the explicit prior consent of your users. In other words, tags are what happens, while triggers determine when it happens. Inception v3 is a convolutional neural network for image analysis and object detection that got its start as a module for GoogLeNet, and related graphics techniques reach down to shader programming in the OpenGL Shading Language (GLSL). RESEARCH COLLECTION | 2020: Connect Google Analytics to Google Data Studio
Onwards to the next era of spatial computing and how Google lets us experience 3D and augmented reality in Search. The user experience is still the primary obstacle for AR mass adoption, and the biggest obstacle for VR mass adoption too, even as spatial computing gradually gains influence over the automobile industry. In the future, people will access information via glasses, lenses, or other mobile devices, while autonomous vehicles, drones, and robots move freely through environments, understanding where they are, where they are going, and what is around them. By solving the problem of inaccurate GPS positioning with camera-enabled approaches such as Scape's long-term VPS vision, many of the applications once imagined by AR developers are now a reality, and AR revenues were expected to surpass VR revenues by 2020. Nowadays almost everyone owns a cellphone, and mobile phones have the hardware required for AR, including CPU, sensors, and GPU, enabling the infrastructure for a vast array of new spatial computing services, accelerated by the imminent arrival of widespread 5G networking and edge compute delivering massive bandwidth at extremely low latency.
RESEARCH COLLECTION | 2020 - Web VR Experiments with Google
LEARNED LESSONS | 2020: WebXR Structures
"Customer experience with Digital Content refers to a customer’s perception of their interactive and integrative participation with a brand’s content in any digital media. " - (Judy & Bather, 2019) -
In addition to adding augmented reality to the product value, Microsoft has been offering MS Office applications for its HoloLens device and showing what future offices could look like without screens and hardware. This could also point to new virtual competitors, and AR apps can serve as a further direct-to-consumer channel. Some unanswered questions that are both theoretically and managerially relevant are: "How does it impact consumer-brand relationships if, for instance, consumers 3D-scan branded products and replicate them as holograms? How do consumers interact with virtual products in their perceived real world compared with real products, and what advantages and disadvantages do they see? Which dynamic capabilities drive the success of Augmented Reality Marketing? Which competencies do Augmented Reality marketers need? How should these requirements be integrated into digital marketing curricula to lead to better decisions and lower return rates? How should Augmented Reality Marketing be organized and implemented, and how are good content marketing and good storytelling (inspirational user experiences) organized? What drives the adoption of Augmented Reality? How can the success of Augmented Reality Marketing be measured?" At the end of the event I suggested to Filipe Barroso
(responsible for organizing the Lisbon Google Developer Groups event) that it would be invaluable to get in touch with programming schools like ETIC so that we could all engage in future educational workshops together, intersecting areas and interacting through the events. For someone who is learning, interconnection matters: students expressed openness to initiatives that included group and teamwork contexts, sharing knowledge and opportunities to grow. When I look back at this event, even though I wasn't fully prepared to understand some of the concepts, they did make sense later. This is the process of knowledge: realizing that even if something does not make sense now, it will eventually connect in the future. There are alternative ways to connect; you don't have to follow traditional advice or go to events to successfully build and maintain a valuable network, since most are mixing bowls for professionals who are there for different reasons. While attending, I noticed how millennial professionals dislike the idea of meetings, yet contradictorily waste large amounts of time on more expensive events without a good return on their investment of time and money. An activity should be meant to increase the value of your network and/or the value you contribute to it. Proper networking is about building new relationships and deepening your existing ones.
Communication in teams is equivalent to the neural network of the human body. Technologies that support collective interaction include online discussion boards and mailing lists. So even after the Google event, I opened an online channel on our Computer Science Slack called "eventos_tech", a virtual space where I shared everything I had learnt with the group, as an incentive for my colleagues to appreciate the importance of exchanging knowledge and being there to help each other; the notion of shared workshops and tech events creates motivation towards other, bigger challenges. It was also important to understand how the lack of an adequate project scope leaves a project without context, so that it becomes dispersed or misperceived by team members and even future clients; underestimating the time and effort required to deliver a task can turn a challenging project into a hellish one. Without clarity and vision we are unfocused, going nowhere fast. Google Developer Groups (GDGs) are for developers who are interested in Google's developer technology: everything from the Android, Chrome, Drive, and Google Cloud platforms, to product APIs like the Cast API, Maps API, and YouTube API.
As the world evolves, businesses struggle to stay up-to-date with new technologies, market trends, and consumer behavior to remain competitive. This can be challenging, as businesses must continually invest in research and development, training, and innovation to keep up with the changing landscape. An adaptive mindset is essential for confronting these challenges directly. Acknowledging opportunities and managing risks are essential for sustaining long-term success, influencing both individuals and entrenched paradigms, which can be resistant to change. Change management, applicable across diverse organizational contexts, demands a structured approach akin to project management, minimizing errors and facilitating seamless transitions. The intricacies of decision-making and strategy formulation are amplified during transitional phases, highlighting the significance of comprehensive analysis. In the contemporary landscape, organizations must prioritize continuous learning, foster collaborative teamwork, and enhance communication strategies to adeptly respond to societal shifts. Achieving results hinges on engaging stakeholders and fostering a culture of collaboration, rather than succumbing to self-interest or competitive isolation. Common catalysts for change encompass technological advancements, process enhancements, crises, evolving consumer preferences, and external pressures such as market entrants, acquisitions, mergers, and organizational restructuring. These catalysts invariably impact individuals, challenging established norms and displacing comfort zones. Proficiency in Organizational Change Management is indispensable across various professional domains, spanning leaders, team members, project managers, IT specialists, HR professionals, and beyond. Integrating change management with project management is pivotal for seamless execution and optimal outcomes. Treating change initiatives as social endeavors rather than top-down mandates enhances their effectiveness and acceptance. Leveraging data to identify patterns of high performance aids in pinpointing communication barriers, inefficiencies in team structure, and spatial configurations conducive to knowledge sharing. The integration of risk management and compliance cultivates a culture of ongoing enhancement, reducing deviations from standards while optimizing resource utilization. Organizational Change or Transformation Management entails crafting robust strategies and predictive models informed by data analytics to avert false starts and optimize outcomes. With an adaptive mindset and a structured change management framework, businesses can navigate transitions successfully, realizing their envisioned objectives.
---
📅 Blends of Data to Gain Better Insight - for data travelers, knowledge explorers, and hungry network-loving life-forms who are encountering, experiencing, and exploring nature and like to get to the bottom of things.
As technology advances, it's important to consider not only its potential benefits but also its ethical implications, security risks, and privacy concerns. The hybridization of technology creates challenges and opportunities for digital content and services - particularly in the areas of mobile, IIOT / IoT, and wearable devices:
Challenges:
Fragmentation: the wide variety of devices and platforms available can lead to fragmentation issues, where content and services may not work correctly or consistently across all of them.
Security: the interconnected nature of internet-enabled devices increases the likelihood of security breaches, underscoring the importance of implementing measures to safeguard against such threats.
Compatibility: It's a challenge to ensure that content and services are compatible with a wide range of hardware and software configurations, which can vary greatly among different devices and platforms.
Complexity: developing applications for hybrid technologies, which combine multiple platforms or devices, can be more complex than creating software for a single platform or device due to the need to consider and integrate various different technologies.
Inequalities: certain groups may not have access to the latest devices or may not be able to afford the cost of upgrading their devices, which could limit their access to digital content and services.
Opportunities:
Reach: the proliferation of mobile, IIoT/IoT, and wearable devices allows content and services to reach users across far more devices, contexts, and locations than ever before.
Personalization: devices offer new opportunities for creating personalized experiences for users, tailoring content and services to their specific needs and preferences.
Innovation / new business models: the combination of different technologies and devices can inspire new ideas and approaches to content and service development, such as subscription services, microtransactions, and pay-per-use models, which can generate new revenue streams and increase profitability.
Accessibility: Companies can expand their reach to new audiences and improve accessibility by offering their content and services on multiple platforms and devices.
Digital content requires compatible data mediums for different hardware upgrades. High-end 3D software requires high-performance hardware, including faster processors, large memory capacities, and high-speed networks. Developers must optimize and secure this hardware, despite physical constraints in chip manufacturing and battery life. Integrating new software or replatforming existing software can be difficult due to scalability, connectivity, interoperability, and limited third-party library support. However, these changes can offer benefits such as shorter development time, exposure to more users, backups, synchronization, and performance updates. Developers must carefully consider all factors when integrating new technology or replatforming existing software to ensure success, balancing technical challenges, compatibility with existing systems, and cybersecurity measures. In an environment of thin margins, the wrong technology investments can have long-lasting strategic and financial implications. It is therefore essential to conduct thorough project scoping and to estimate the required resources accurately rather than underestimate them. Additionally, embracing digital transformation and self-organization can foster adaptation and drive innovation, leading to valuable insights from unexpected sources.
Questions arise: "How do organisations articulate and realize their social media marketing objectives? What are those objectives, e.g. brand building, attracting advocates, increasing sales, enhancing visibility, cultivating community communication? Are the objectives short-term or long-term? How can client firms work effectively with their portfolio of marketing agencies to integrate SMM into wider marketing communications? How is campaign success measured? Can different styles of communication be characterized, and if so, are some more successful for marketing than others? How can SMM contribute to the authenticity and trust that consumers place in the brand? What factors affect the success and impact of endorsement?"
Technology has a direct impact on a company's ability to keep production moving efficiently. Even the best architecture, the one most perfectly suited for the job, will be essentially useless if the people who need to use it do not know what it is, cannot understand it well enough to use, build, or modify it, or (worst of all) misunderstand it and apply it incorrectly; in that case, all of the effort, analysis, hard work, and insightful design on the part of the architecture team will have been wasted. Creating an architecture isn't enough. It has to be communicated in a way that lets its stakeholders use it properly to do their jobs. If you go to the trouble of creating a strong architecture, one that you expect to stand the test of time, then you must go to the trouble of describing it in enough detail, without ambiguity, and organizing it so that others can quickly find and update the information they need. Documentation speaks for the software architect: it is very difficult to determine the architectural characteristics of an application without fully understanding the inner workings of every component and module in the system. Basic questions about deployment and maintenance are otherwise hard to answer: Does the architecture scale? What are the performance characteristics of the application? How easily does the application respond to change? What are its deployment characteristics? How responsive is the architecture? When choosing technologies, budget should be considered in terms of both cost and potential return on investment, including scalability, profitability, and efficiency. Scalability is particularly important, as critical applications must be able to handle increasing traffic and data demands without compromising availability. In addition to selecting the right technology, it's important to ensure that stakeholders understand it well enough to use, build, and modify it effectively; this requires clear documentation and communication to avoid misunderstandings and to ensure the architecture can stand the test of time. Choosing the right programming language and resources to develop a program depends on several factors, such as the nature of the program, the skill set of the developers, the scalability and efficiency requirements, and the budget. It's important to consider the long-term impact of the technology choice and its potential to yield a positive return on investment. When it comes to scaling critical applications, it's important to design the architecture and infrastructure with scalability and availability in mind. This involves technologies and practices such as load balancing, caching, clustering, and fault tolerance; the architecture should handle increasing traffic and data demands without compromising the quality of the user experience. Another important factor is ensuring that the people involved in developing and maintaining the application have the necessary skills and knowledge for the technology stack, which may require training, hiring new talent, or outsourcing certain aspects of the development process. Overall, the choice of technology and resources should be made with a focus on maximizing efficiency, scalability, and profitability, while also ensuring the quality and reliability of the application. While small business owners understand the value of new technologies, I see how they still struggle with choosing the right products, as well as the right time to adopt them to have the greatest impact on their business.
How do you choose the language in which to write your program, or choose resources (human and technological)? Budget should consider two key things: the cost of the technology and its implementation, and whether it yields a concrete return on investment (scalability, profitability, and efficiency). Companies struggle to scale critical applications: as traffic volume and data demands increase, these applications become more complicated, exposing risks and compromising availability. The goal is applications that can handle huge quantities of traffic, data, and demand without affecting the quality customers expect. To prevent an application from becoming slow, inconsistent, or downright unavailable as it grows, scaling isn't just about handling more users; it's also about managing risk and ensuring availability. Different software packages offer diverse approaches to graphic design, some free and some paid, and most programming languages are open enough to allow doing things in multiple ways for a similar outcome. For instance, XR hardware requirements may vary according to network latency, CPU speed, colour handling, use of proxies, and other factors. The problem of PC, Mac, and Linux headsets comes down to technical specifications. It is well known that PC-type headsets perform based on how well your PC performs. This is where a Mac is at a disadvantage: you need a fast video card, and Macs are typically fast enough for graphics and some gameplay, but not for VR. Apple machines are only now adding the ability to attach an external video card for VR, as most of them simply do not have the video processing capability to render VR at the resolutions used by the Vive and Rift headsets; however, Apple has introduced VR-ready machines that are more likely to be performance-stable with mixed reality or 360° video. PC hardware still gives a better VR experience. To avoid VR sickness you need a fast frame rate, that is, how quickly your computer can generate the images on the screen, and much depends on the complexity of the scene. Mobile controllers offer only 3 degrees of freedom (DOF): they track tilt, yaw, and roll, but not position. If you move the controller flat to your left, in the game your controller hasn't moved at all, which is why you can't grab things with a mobile controller. The Vive and Oculus Rift both have 6-DOF controllers, so you can move them around and grab things. If you wonder what resolution yields the highest clarity in VR360, the final clarity depends on a number of factors, from the quality of the source video resolution, frame rate (fps), bit rate (Mbps), dynamic range, compression, and rendering pipeline to latency and the screen's resolution and structure. That is why it is so important to document integration testing with a benchmarking chart toolkit, and why you need 8K 360° videos for VR headsets like the Oculus Go, Vive Focus, and Samsung Gear with an S8 or S9 with Snapdragon chips on a 3K screen. Understand when to use simple sampling or multiple sampling, and how it will affect your video quality.
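As a rough, illustrative check of why frame rate matters so much for comfort, the short Python sketch below (not from the event) computes the per-frame time budget at common refresh rates and the number of rendered views per second once each frame is drawn twice, once per eye.

```python
# Per-frame time budget at common display refresh rates.
for fps in (60, 72, 90, 120):
    budget_ms = 1000.0 / fps
    print(f"{fps:>3} fps -> {budget_ms:5.2f} ms to simulate, render and present each frame")

# Stereo rendering: each frame is drawn once per eye, so a 90 fps headset
# effectively asks the GPU for 180 rendered views every second.
hmd_fps, eyes = 90, 2
print(f"{hmd_fps} fps x {eyes} eyes = {hmd_fps * eyes} rendered views per second")
```

Missing that roughly 11 ms budget, even occasionally, shows up as judder, which is one of the triggers of the cybersickness discussed further below.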
OPINION: Six Things Brands And Agencies Need To Know Before Making Augmented Reality Campaigns.
OPINION | 2020: Is virtual reality the next channel for digital marketers?
RESEARCH | 2020: Novel math could bring machine learning to the next level
Although high-quality images can make a website stand out, they unfortunately come with a price: due to large file sizes they are bulky to download and result in slow page load times. If you've ever been on a device with a poor network connection, you know how frustrating this experience can be. The WebP image format, developed by the Google team, is a solution: WebP images are about 26% smaller than PNGs and around 25-34% smaller than JPEGs, a decent saving where the image quality isn't noticeably affected. Another image format that will become useful in future browsers is Scalable Vector Graphics (SVG). It is not yet a universally supported format, but it is very powerful: unlike the other image formats, SVG is vector based, which means it is totally scalable without quality loss. You can reduce a JPEG, GIF, or PNG in size, but when you artificially enlarge them they lose quality and appear pixelated.
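As a minimal illustration of that saving, the sketch below uses the Pillow imaging library in Python to re-encode a PNG as WebP; the file names and quality setting are placeholders, and the actual size reduction depends on the image content.

```python
import os
from PIL import Image  # pip install Pillow

src, dst = "hero.png", "hero.webp"             # hypothetical input and output paths

# Lossy WebP at quality 80; pass lossless=True instead to preserve pixels exactly.
Image.open(src).save(dst, "WEBP", quality=80)

before, after = os.path.getsize(src), os.path.getsize(dst)
print(f"{src}: {before / 1024:.1f} KiB -> {dst}: {after / 1024:.1f} KiB "
      f"({(1 - after / before) * 100:.0f}% smaller)")
```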
Over time, 3D game development languages have evolved to meet the demands of VR, which emphasizes physical immersion and requires a high frame rate and resolution. To achieve this, VR programming is typically done in high-speed, low-level languages like C++ (for Unreal and CryEngine) or C# (for the Unity engine), which can generate 90 frames per second. Modern engines handle tasks such as rendering, physics, terrain, lighting, and AI to create realistic and sophisticated virtual environments, as well as networking to build multi-user experiences that can run on mobile platforms. To ensure smooth VR experiences, it's crucial to have up-to-date hardware, since low frame rates and stuttering can cause cybersickness. As developers continually push the limits of acceleration, FOV, refresh rates, and FPS, investing in expensive VR PCs becomes necessary. Even factors such as user height can contribute to a nauseating VR experience. To explore the latest VR games without experiencing cybersickness, it's recommended to have the latest CPUs, GPUs, and RAM. Nvidia GPU users can use the GeForce Experience app to optimize their PC for each game, while AMD GPU users can use the AMD Gaming Evolved app for the same purpose. Additionally, not everyone can afford to invest in expensive VR hardware, and it is important to make VR accessible to as many people as possible. What makes VR interesting? It offers a level of immersion and interactivity that is unparalleled by any other medium. VR can transport users to virtual worlds, allowing them to explore, interact with objects, and even engage with other people in ways that feel almost as real as the physical world. VR also has the potential to be used in a variety of fields, including entertainment, education, healthcare, and more. It can provide new ways of experiencing and understanding complex data, simulating real-world scenarios, and training individuals for high-risk or complex jobs. Additionally, as technology continues to advance, VR is becoming more accessible and affordable, making it an increasingly popular and exciting area of innovation. "A medium composed of interactive computer simulations that sense the participant's position and actions, and replace or augment the feedback to one or more senses, giving the feeling of being mentally immersed or being 'present' in the simulation" [Sherman and Craig, 2018]. Virtual reality is a medium, which means it is a means of conveying ideas from person to person, or people to people: from creators to recipients. Furthermore, the medium itself is a filter on concepts as they are conveyed, and thus has a higher, over-arching influence on consumers of the medium; "the medium is the message", as McLuhan proclaimed [McLuhan, 1964].
ARTICLE | 2020: WHAT IS VIRTUAL REALITY SICKNESS?
Simulator sickness, also known as VR sickness or cybersickness, can produce a variety of symptoms including sweating, nausea, headaches, and drowsiness, similar to the symptoms experienced by motion sickness sufferers in cars, boats, or planes. The prevailing theory on the cause of VR sickness suggests that it results from a mismatch between a viewer's actual experience and what their brain perceives. According to Kolasinski's (1995) conflict theory, the brain perceives that the input (such as sight or sound) is not genuine or accurate. Normally, the mind detects when something is wrong with the body and attempts to correct it by performing safety checks to ensure that the body is in a healthy state. The detection system searches for cues or hints of abnormalities, such as linking movement and sight, to compile a body of information that can verify the accuracy of the experience. When the cues do not align, the brain goes into a defensive mode that is typically triggered when the body is under attack: a rejection system is activated, as the brain attempts to remove any perceived poisons or threats from the body. A stable frame rate is essential for VR content. Because every frame is rendered twice, it needs to run at 90 frames per second instead of 60, the number of objects to render roughly doubles, and everything has to run about 1.5 times faster than in a traditional PC game; even the smallest hiccup can cause an uncomfortable feeling for the player.
* Before optimizing your Unity projects, identify the areas where optimization is required. Unity provides built-in analysis tools like the Profiler and Frame Debugger, which can help you understand why your game or experience is taxing the CPU, GPU, and RAM.
* To trade performance against quality, adjust the render scale at runtime: select Play Area under [VRTK_SDK_MANAGER] and [VRTK_SETUP], add a VRTK_AdaptiveQuality component, and set the Scale Render Viewport minimum to 0.5 and maximum to 4. This component automatically adjusts the viewport's resolution based on the performance level, decreasing it when necessary and increasing it when the machine has sufficient processing power.
* To reduce the number of draw calls, use the single-pass stereo rendering technique, which improves on regular single-pass rendering. By using GPU instancing, it can cut down the number of draw calls needed. This technique is currently only available on Windows 10. To enable it, go to Player Settings by selecting Edit ▸ Project Settings ▸ Player from the top menu, open XR Settings, and select Single Pass Instanced (Preview) in the Stereo Rendering Method drop-down.
* Unity's default multi-pass rendering displays VR content by rendering the full view twice, once per eye, which has a significant performance impact because it doubles the number of vertices processed. Since version 5.6, Unity has offered more efficient techniques such as double-wide rendering, which renders the left- and right-eye images simultaneously into one texture, reducing the workload on the GPU and saving CPU and GPU time; using it can yield around 20% less CPU time and 10% less GPU time.
* To achieve a smooth frame rate in VR, optimize geometry, textures, and materials; quality textures matter more than the number of polygons displayed.
* Lossless source formats such as PNG or BMP are recommended, since Unity handles compression for the output platform; Mesh Baker can automate combining meshes and textures.
* For efficient culling, separate small, static level geometry into different GameObjects. To tune occlusion culling, open the Occlusion window by selecting Window ▸ Rendering ▸ Occlusion Culling from the top menu and set the Smallest Occluder value to 2 in the Bake tab.
* Enabling GPU Instancing on materials can significantly improve performance by allowing Unity to render multiple copies of the same mesh at once, saving thousands of draw calls.
* To avoid VR sickness, consider these technical options: (1) limit movement speed or use teleportation instead of fast movement; (2) fade to black and teleport players to the ground instead of showing free falls; (3) prefer natural methods of locomotion, like hand swinging or teleportation, over direct joystick movement; (4) fix performance issues and use automatic resolution and effect adjustments. To keep players comfortable, optimize performance (watch the CPU and GPU cycles), fade or cut to black when teleporting, add a vignette around the corners of the view when moving fast, and put players in a vehicle cockpit; because of the way human brains work, it feels natural to move and rotate quickly when you are inside a vehicle, so direct movement can be allowed there.
Thinking about a cross-platform problem: leading mobile platforms like Apple and Google keep delivering new and improved AR toolkits for developers. Google offers ARCore for Android developers, while the iOS platform offers ARKit, loaded with enhanced AR features for building immersive applications. These are examples of platform-specific toolkits that block scaling compatibility between different models and do not allow cross-platform rendering.
Thinking about a cross-platform solution: the computer-mediated reality cross-platform library ViroReact (installed alongside the React VR CLI via `npm install -g react-vr-cli`) is a free and open-source option for immersive experiences. It offers effective documentation on native AR apps and can use a single code base for developing and deploying AR/VR features on both the iOS and Android versions of an app. React supports both the front end and the server side, bringing high efficiency, reusability, higher speed, agility and responsiveness to the web app, and a user experience friendly to hybrid spaces.
React VR apps are written in JSX (JavaScript XML), a syntax that allows HTML-like tags to be mixed into JavaScript code; React VR is based on React and React Native. A-Frame apps use HTML with custom tags: a powerful framework providing a declarative, composable, reusable entity-component structure for three.js. A-Frame can be used from HTML, although developers still have access to JavaScript, DOM APIs, three.js, WebVR, and WebGL. Both frameworks allow custom JavaScript code and interfacing directly with three.js and WebGL. Over time, developers have also been considering blockchain-based systems as alternative systems for information storage, weighing their advantages and disadvantages and determining where they are the better fit and where alternatives offer a better solution. In Current's platform, blockchain plays a central role by capturing user activity at the conclusion of a track play, analyzing the play for legitimacy or fraud, and then applying a series of network- and individual-influenced coefficients to derive a reward value using the platform's credit system. The question remains: "Why would a developer want to build on EOS and blockchain in general versus the cloud or some other centralized alternative?" The main objective of the Current protocol is to facilitate transfers of value between media services by partnering with media networks. Because video traffic was forecast to account for 82% of all Internet traffic by 2022, 4K/8K and 360° video formats require more transcoding capacity and processing power delivered in a better, cheaper, faster, and more optimized way, through a strategic transformation of technology. To enable content owners to deliver ultra-low-latency, high-quality video content and reduce their reliance on complex and costly transcoding services, cloud storage providers, and aggregators, Eluvio has launched its Content Fabric for Internet video distribution.
Throughout professional practice I encountered new challenges while editing, transcoding, decoding, rendering, and delivering TV broadcast video content, which taught me the importance of containers, since they hold metadata about the media in the file. That metadata can be as simple as the frame rate of the video, or it can record what camera and lens were used to shoot the footage, what settings were applied, where it was shot, and information about the specifications of the shot and the production. The metadata within a container can sometimes also tell what standards the footage was produced in and complement the camera shot list, shot designer, and storyboard planning documentation.
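For a quick way to inspect that container metadata in practice, the sketch below shells out to ffprobe (part of the FFmpeg suite) from Python and prints the container format, duration, and per-stream codec and frame rate; the file name is a placeholder and the script assumes ffprobe is installed and on the PATH.

```python
import json
import subprocess

SOURCE = "master_edit.mxf"  # hypothetical broadcast master

# Ask ffprobe for machine-readable container and stream metadata.
probe = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_format", "-show_streams", SOURCE],
    capture_output=True, text=True, check=True,
)
info = json.loads(probe.stdout)

fmt = info["format"]
print(f"container: {fmt['format_name']}, duration: {float(fmt['duration']):.1f} s")

for stream in info["streams"]:
    kind = stream["codec_type"]                    # 'video', 'audio', 'data', ...
    codec = stream.get("codec_name", "unknown")
    rate = stream.get("r_frame_rate", "-")         # e.g. '25/1' for 25 fps video
    print(f"  {kind}: codec={codec}, frame_rate={rate}")
```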
Tracking and sorting the RAW and Log video formats is the alignment made before the rough cut and before processing color correction. These are necessary steps to neutralize the image profile even before you actually begin editing colors with color grading, and they save a lot of time in the grading steps. The important thing is to identify the file information and the type of compression and codec you are using, so you know in advance whether you will be able to grade as much as you need or whether you will be limited by technical specifications. The question "What do people look for in video editors?" usually comes down to: lift-gamma-gain; shadows, midtones, highlights; blacks, mids, whites. By adjusting the blacks first you set the baseline for the rest of the image; you will notice that adjusting the blacks and whites affects the waveform as a whole, so it becomes a matter of balance, finding the right point between the two. Midtones don't affect blacks and whites, so leave them for last. If you raise the midtones, the image loses saturation in general, so compensate by increasing saturation a little to keep the colors vivid. Well-exposed skin usually sits between 60 and 70 IRE on the waveform; be careful not to "pull" the midtones too much, because they usually bring a lot of noise into the image. As with every variable in cinema, the more knowledge and experience you have of photography, the easier it is to opt for a slightly higher or lower IRE depending on the look you are after, or even on the camera and format in which it was recorded. This area covers several others, interspersed through post-production, and speaks directly to the narrative intentions of the director and director of photography. The colorist is a technical artist of the film, and the marriage between their knowledge of formats, displays, and scopes intertwines with their artistic ability to accentuate the tone of the scene. Color correction is the technical and mechanical process. The use of scopes (waveform, vectorscope, parade) is essential: even if you don't have a calibrated monitor, which can be very expensive, if you trust your scopes and work correctly with them, you can guarantee a solid technical result. The waveform gives you all the information about luminance in the scene, the vectorscope gives you information about chrominance, and the parade shows the red, green, and blue values separately. Each video clip receives manual adjustments to get good exposure and balance of lights, and each clip is adjusted to the color temperature previously set for the scene; monitors, and even your eyes, end up adapting to the light and color of the environment. The steps of color correction can directly affect how much information is preserved in the image, so the ideal order of operations is: remove artifacts and apply de-noise; balance your shots by black/mid/white, saturation, and white balance; re-illuminate within a shot using power windows or masks; add gradients, diffusers, and other lens filters; add vignettes; grade the images; simulate any film stock you want; resize and add detail. You don't have to perform all of these steps on every shot, but if you do perform them, that is the ideal order.
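To make the waveform and IRE discussion concrete, here is a small sketch (my own illustration, not from the original text) that uses Pillow and NumPy to compute per-pixel luma with the Rec. 709 weights and report roughly where a chosen skin-tone region sits on a 0-100 IRE scale; the file name and crop coordinates are placeholders.

```python
import numpy as np
from PIL import Image

# Hypothetical frame grab exported from the edit.
frame = np.asarray(Image.open("still_frame.png").convert("RGB")) / 255.0

# Rec. 709 luma weights; scale 0.0-1.0 luma to an approximate 0-100 IRE range.
luma = 0.2126 * frame[..., 0] + 0.7152 * frame[..., 1] + 0.0722 * frame[..., 2]
ire = luma * 100.0

# Hypothetical crop around the subject's face (rows 100-220, columns 300-420).
face = ire[100:220, 300:420]
print(f"frame luma: min {ire.min():.0f} IRE, median {np.median(ire):.0f} IRE, max {ire.max():.0f} IRE")
print(f"face region median: {np.median(face):.0f} IRE (well-exposed skin is usually around 60-70 IRE)")
```

This is only a rough software stand-in for a real waveform monitor, but it illustrates why trusting scopes rather than an uncalibrated display is workable.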
Color grading is the creative process where decisions are made to establish the desired mood for scenes through software: accentuating certain colors, emulating styles, and other choices. Being purely creative, there is no right or wrong way of doing things, only what the director of photography, the colorist, and the director want for the scene. The challenge is making the right choice: the tools available are numerous and powerful, and the question is how to use them accurately for what the film asks. Nowadays it is possible to create or download LUT files for cameras like Blackmagic and RED for free on the internet, so you can practice your colorist skills in DaVinci Resolve (also free) or even in ordinary video editors. Working as an editor taught me the importance of getting the export right: the container format of a video file, an audio file, and metadata, and how it affects broadcast or streaming, the "container" or "package" used for video transmission, storage, and playback. I came to understand the need for an MXF metadata wrapper, which primarily exchanges pictures, sound (synthesized audio or MIDI), and generally static elements (still graphics or text) as well as data items (teletext or closed-caption files), along with a small amount of metadata, to take advantage of the benefits of converting video formats. I questioned how to support multiple protocols simultaneously ("What protocol should be used when sending video to multiple devices at once?"), how to handle transcoding and transrating, and how producing streams at different resolutions and qualities to deal with different bitrates and decoders/players justifies multi-channel encoding: adaptive bitrate encoders, which produce multiple profiles for compatible destinations to choose from, and transcoding media servers, which are software and services that let you manipulate and multiply your source video streams to suit your application. Different protocols are designed for different applications. Streaming protocols allow encoded video to be transported, either in real time or later; the protocol affects how a viewer can interact with the video, the reliability of delivery of that stream, and which devices or software players can access it. Some protocols can only be used with specific vendor hardware, significantly reducing the interoperability and potential reach of that content. Latency is a key consideration, as the protocols used across the cloud or public internet may differ from those used for facility AV-over-IP infrastructure. There are five common streaming protocols that professional broadcasters should be familiar with: HLS, RTMP, SRT, MSS, and MPEG-DASH. Suppose an organization has added new equipment capable of generating very high resolutions, such as 4K; using codecs that produce a small enough bandwidth may be enticing, but the codec and/or encoding profile used directly at the source to mitigate its bandwidth use may not match the optimal codec or encoding profile for content distribution at large. Transcoding can be expensive, and archiving only the highest-resolution content to avoid storage costs does not resolve the technical issues. Distribution always requires well-established technologies for maximum compatibility and reach.
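To illustrate what those adaptive-bitrate "multiple profiles" look like in practice, here is a small Python sketch of an invented rendition ladder and a selector that picks the highest rendition fitting a viewer's measured bandwidth with a safety margin; the rungs and the margin are illustrative, not a standard.

```python
# Illustrative adaptive-bitrate ladder: (label, width, height, video bitrate in kbit/s).
LADDER = [
    ("240p",   426,  240,   400),
    ("360p",   640,  360,   800),
    ("480p",   854,  480,  1400),
    ("720p",  1280,  720,  2800),
    ("1080p", 1920, 1080,  5000),
    ("2160p", 3840, 2160, 14000),
]

def pick_rendition(measured_kbps: float, safety: float = 0.8):
    """Return the highest rendition whose bitrate fits within the measured bandwidth."""
    budget = measured_kbps * safety
    fitting = [r for r in LADDER if r[3] <= budget]
    return fitting[-1] if fitting else LADDER[0]   # fall back to the lowest rung

for bandwidth in (600, 3500, 20000):               # kbit/s measured at the player
    label, width, height, kbps = pick_rendition(bandwidth)
    print(f"{bandwidth:>6} kbit/s available -> serve {label} ({width}x{height}) at {kbps} kbit/s")
```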
Streaming Protocol vs. Codec vs. Container Format: logically, different streaming codecs are used for different purposes. MXF is a container format that the stand-alone Adobe Media Encoder can encode and export movies to, and Premiere Pro can export MXF files containing the MPEG-2 essence items that comply with the XDCAM HD format or other broadcast media. All Apple ProRes codecs support all frame sizes (including SD, HD, 2K, 4K and 5K) at full resolution; the data rates vary based on codec type, image content, frame size and frame rate, and applying the least compression for the best imagery results in the largest files, which can become expensive over time.
So the question before choosing a protocol or format should be: "Why transcode when it increases file size significantly?" It only makes sense when you need to convert to an editing codec or deliver in a specific codec (e.g. iPhone, Vimeo, WMV, etc.). The editor is forced to convert highly compressed memory-card content, which is not designed for editing, into an edit-friendly codec that is usually much less compressed; that means larger file sizes and compatible hardware that can actually support them - up-to-date disks and PC.
Today, adaptive bitrate streaming technology automatically detects users' bandwidth and processing availability in real time and provides a media stream that fits within those constraints. In enterprise and media-and-entertainment encoding, this means video sources are often sent at their maximum quality and resolution profile, while the local encoder and/or streaming server also creates additional copies of the source at reduced settings. A 4K source, for example, can be kept in 4K and decoded on an appropriately powered viewing node; but the same 4K source can also comfortably supply tablets and smartphones, which often have lower-resolution screens anyway, and the corresponding reduced-resolution stream is served to match what the wireless network and the processing power of those devices can handle (the "scaling" of video sources). When a video is encoded into a certain format, its data is compressed so that it can be stored while consuming less space: the newer HEVC codec, for example, can dramatically reduce the file size of H.264 videos, by up to 50%.
The M2T file extension corresponds to the technology standards agreed between studios and broadcasters for faster transfer between media servers, as it was mostly used for broadcasts integrated with ATSC (Advanced Television Systems Committee) standards. Developed for satellite broadcasts as well as terrestrial broadcasting applications where bad signal issues are common, the stream-synchronization technology and error-correcting features embedded in these .m2t files help improve the overall quality of .m2t streams. These files usually contain digital motion graphics or animations, audio data, sound clips and effects, 2D and 3D graphics, as well as text content for subtitles and so on. The video content stored in .m2t files has a maximum resolution of 1080i. This understanding makes the exported video file compatible with a wide range of playback systems that rely on the M2T or MXF standard for broadcast and archiving (e.g. a Digital Cinema Package (DCP)). Ideally a codec should also have hardware support, which can sometimes be a problem, as it takes time before newer codecs are supported at a hardware level.
Video Encoding: The Definitive Guide [Updated for 2021] - What Are the Benefits of Different Video Formats?
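A short sketch of the container-vs-codec distinction discussed above: remuxing copies the existing essence into a new container with no quality loss, while transcoding re-encodes it. It assumes ffmpeg with the libx265 encoder is available, and the file names are placeholders.

import subprocess

# Remux: change the container only; the compressed essence is copied untouched (fast, lossless).
subprocess.run(["ffmpeg", "-y", "-i", "master.mxf", "-c", "copy", "remuxed.mov"], check=True)

# Transcode: re-encode the essence, e.g. H.264 -> HEVC to shrink a delivery file,
# or into an intra-frame editing codec, which grows the file but cuts decode cost.
subprocess.run([
    "ffmpeg", "-y", "-i", "delivery_h264.mp4",
    "-c:v", "libx265", "-crf", "28", "-c:a", "copy",
    "hevc_delivery.mp4",
], check=True)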
Video compression became an important area of research in the late 1980s and 1990s and enabled a variety of applications, including video storage on DVDs and Video CDs, video broadcast over digital cable, satellite and terrestrial (over-the-air) digital television (DTV), and video conferencing and videophone over circuit-switched networks. Nowadays, despite all of the above, when dealing with camera raw images Apple ProRes 422 LT is the most commonly used codec for offline workflows, because it keeps data rates low while preserving full-resolution video. Even though it solves some offline production issues, it is a real problem for cloud or online video streaming, especially when more than one person is accessing the same data (server overload): an average 90-minute Apple ProRes 422 HQ file is around 150 GB. That means it can take days to upload, with the computer processing the upload uninterrupted the whole time (running day and night). Real networks are also heterogeneous in rate: streaming video at home over a 56 kbps modem vs. a corporate LAN at 10-100 Mbps. ONLINE LIBRARY 2020 | Next-Generation Video Coding and Streaming
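A back-of-the-envelope check of the numbers above. The bitrate and uplink figures are assumptions, roughly typical for ProRes 422 HQ at 1080p and a home connection, used only to show where the "days to upload" claim comes from.

# Assumed values: ProRes 422 HQ at 1080p ~220 Mbit/s, 90-minute programme, 10 Mbit/s home uplink.
bitrate_mbps = 220           # approximate ProRes 422 HQ data rate, 1920x1080 @ ~30 fps
duration_s = 90 * 60         # 90-minute programme

file_size_gb = bitrate_mbps * 1e6 * duration_s / 8 / 1e9
print(f"File size: ~{file_size_gb:.0f} GB")                           # ~148 GB

uplink_mbps = 10
upload_hours = file_size_gb * 1e9 * 8 / (uplink_mbps * 1e6) / 3600
print(f"Upload at {uplink_mbps} Mbit/s: ~{upload_hours:.0f} hours")   # ~33 hours, before any retries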
Virtual infrastructure: cloud infrastructure and virtualization solutions and services.
Network virtualization is different from server virtualization (and plays a significant role in cloud technology / cloud computing): at the physical level, network operation depends on specialized, shared programmable hardware - packet-forwarding hardware - and associated real-time software. The flexibility of the cloud endeared it to corporate consumers for the following reasons:
- On-Demand Service - use it when needed; this provides some degree of freedom for customers.
- Network Access - utilizes the internet and can be accessed from laptops, workstations and smartphones.
- Pooling of Resources - resources are pooled to give customers customizable, variable costs based on business size.
- Scalability - scale up or down based on your current needs.
Cloud computing service providers offer different service models according to the customer's needs; these models are called SaaS, PaaS and IaaS, and they are often depicted in a pyramid-like structure.
The recent example of Netflix and other services such as Facebook, which agreed to reduce the quality of their streaming services to help internet infrastructure cope with the increased traffic caused by the Covid-19 coronavirus, reveals how this issue will become more problematic over the years. It clearly anticipated the increase in remote work, which will likely have broad implications for internet speed (lower upload Mbps - megabits per second) and for the economy, particularly consumer spending. Until then, quality-to-filesize ratios will always be important, particularly in the age of 4K and ever-increasing screen resolutions. If someone tries to upload lossless footage on a less-than-amazing internet connection, they'll get frustrated and quit. As displays increase in size, compression techniques become more efficient, playback devices become more sophisticated and internet connections improve, so will the quality of 4K video. The fact that 4K files are larger means they require more bandwidth and storage space to upload, so they aren't available to all users.
Have you ever questioned why editing software or web apps - similar to Instagram or Facebook - take so long to upload or read a video, and why they crash so much, or even overheat mobile phones and PC components, as you work with higher resolutions? Equivalent incompatibility issues also happen with disks that don't support 4K (e.g. 5400 rpm or even 7200 rpm drives): when pushed to perform more than they can handle, the hardware overheats, crashes, damages data, or leaves you stuck in a crash loop. The truth is that higher resolution requires more space and a larger profit margin to maintain it, and not every piece of hardware can truly support its scalability: you will need more drive and server space, which can be a huge issue when building websites (quality of time and product vs. profit; upload quality vs. upload speed). It is also a time-consuming experience. In fact, downscaling 4K to HD will increase perceived image detail without losing quality, once you have considered the set of resolutions, bitrates and settings used for high-quality video encoding and the reasoning behind those choices: high-resolution HD images can carry more detail than their lower-resolution SD counterparts, and 10-bit images can carry finer gradations of color, avoiding the banding artifacts that can occur in 8-bit images.
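As a sketch of the "downscale 4K to HD" point above (assuming ffmpeg is installed; the scaler flag, CRF value and file names are illustrative choices, not a recipe):

import subprocess

# Downscaling a 4K master to HD ("supersampling"): the extra source detail
# survives the scale-down, so the HD file often looks sharper than an HD-native capture,
# while the resulting file is far smaller to store and upload.
subprocess.run([
    "ffmpeg", "-y", "-i", "master_4k.mov",
    "-vf", "scale=1920:1080:flags=lanczos",   # high-quality scaler
    "-c:v", "libx264", "-crf", "18", "-c:a", "copy",
    "hd_from_4k.mp4",
], check=True)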
The three key properties of digital images that contribute to image quality are frame size, chroma sampling, and sample bit depth - while a good codec offers industry-leading performance and quality at each supported data rate. The role of a codec is to preserve image quality as much as possible at a particular reduced data rate, while delivering the fastest possible encoding and decoding speed. At the end of the day, you will always have your raw image and metadata backed up, protected and valuable - intellectual property, patents and licensing play an increasingly important part in the development of video coding technology and applications.
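To see why a codec has to work at "a particular reduced data rate", here is a rough calculation of the uncompressed rate implied by those three properties; the frame size, chroma sampling, bit depth and frame rate chosen here are just an assumed example.

# Assumed example: 1920x1080, 4:2:2 chroma sampling, 10-bit, 25 fps.
width, height = 1920, 1080
fps = 25
bit_depth = 10
samples_per_pixel = 2          # 4:2:2 -> 1 luma + 0.5 Cb + 0.5 Cr samples per pixel

bits_per_frame = width * height * samples_per_pixel * bit_depth
mbps = bits_per_frame * fps / 1e6
print(f"Uncompressed: ~{mbps:.0f} Mbit/s")   # ~1037 Mbit/s before any codec touches it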
BOOKS & ARTICLES | 2020: GPU VS CPU For Video Editing
VIDEO NETWORK | 2020: Editing in 4K: Minimum System Requirements / To start editing video, you'll first need to ask yourself some questions about your projects. What type of footage will you be editing: R3D, CinemaDNG, ProRes, XAVC S, MP4? How complex are your projects: single shot, single camera, multi-camera, animation, VFX? What are your output formats? How long do you have to deliver your edits? Finally, are you editing online or offline?
Take into consideration that clients have different hardware and internet conditions (this may also change according to country and global web protocols). So when dealing with high-resolution video, if you have had media crashes (errors) I would recommend you understand your camera hardware first, including the speed of the SD card when shooting high frame rates (24 fps is ideal for a cinematic look and feel; capturing more motion for great slow motion and smoother video requires higher frame rates). Also, choose the frame rate according to the subject you're shooting. Then analyse the properties of the raw footage and all the hardware and applications that will interconnect with that footage, to avoid incompatibilities, delays and misunderstandings between teams. Understand efficient video compression, the quality of video for computer encoding applications, and monitor refresh rates first, well before using utility tools like Dropbox (an FTP-alternative file transfer); use it for larger media files to avoid compression problems on big files. Avoid Google Drive for heavy raw footage: it will damage some data properties when it automatically compresses for you, and it will be extremely slow. Besides, Google Drive and Google Cloud Storage are not the same thing; there is a difference between consumer and professional cloud storage. Rather use it for already edited movies, to avoid crashes and media-compression errors. These are not the most secure platforms to use as a server (only as risky stop-gap alternatives), although they are temptingly cheap; rather, consider more reliable technology in order to safeguard your privacy and media data, because it is very easily hacked.
RESEARCH | 2020: Why Formats Matter
RESEARCH | 2020: The Importance of Codecs, and Containers
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Things started accelerating when Apple and Google created an ecosystem of mobile developers. That's when mobile apps appeared. Now Microsoft, Facebook and other tech companies claim a virtual-reality universe is the future of the Internet. For example, Facebook, the largest social network, suggests that individuals are creating a virtual network consisting of both bonding and bridging social capital. Facebook has previously funded academic research into the social impact of AR wearables and solicited VR hardware proposals. Facebook's announcement blog calls the metaverse the "next computing platform" and says the company will be working with policymakers, researchers and industry partners while building the successor to the mobile internet. In addition, important services like Airbnb, Instagram, Uber, WhatsApp and Amazon are also collecting data. As a result, our relationship with our devices is changing: they are becoming avatars, extensions of us. Millennials want mobile products in their image: confident, connected and open to change. Only users decide what works for them.
As a consumer and researcher I realised that keeping customers interested in a product or service requires a good posting strategy / business plan, because it will mainly reflect on the product you're selling and call upon the target public, nurturing its cause-and-effect dynamics. Providing regular content might sell the image of a great marketing strategy, as it helps your customers stay updated and shows an interest in finding ways to improve and provide even more value. However, as important as regular content is for a marketing strategy, keeping customers updated and the product improving, it will be pretty useless if there is no consistent approach: whenever planners are more concerned with how it will look to visitors than with quality engagement, it can create a misleading culture. "What does that mean?" You might choose to simply paste massive posts in the hope that someone eventually spots the work and follows compulsively, but quantity does not justify quality. Why? Because different platforms usually mean different audiences: many Instagram followers will not be LinkedIn users, and vice versa. It's important to question everything that every business is mostly concerned with:
- What products/services do people want to buy, and why?
- What kinds of websites do people mostly visit?
- What are the biggest hobbies nowadays, and how can my product/service help?
- What types of posts do people share the most?
Hitting the market with the right products at the right time: in practice, new product and service development is a complex, daunting task for engineers, design engineering managers, and those involved in other functions of the project. Product design and engineering processes, and their management, from the sources of innovation onward, need collaboration with suppliers, knowledge providers (for example, creators) and users. The managerial aspects get ample attention, as do the socioeconomic aspects, in the context of product design and engineering. The five key strategic stages:
* Where are we now? - Strategic and marketing analysis
* Where do we want to be? - Strategic direction and strategy formulation
* How might we get there? - Strategic choice
* Which way is best? - Strategic evaluation
* How can we ensure arrival? - Strategic implementation and control
Writing software is like making commercial products: quality is absolutely fundamental. So, "How can you guarantee software purpose?"; "Why do some digital innovations and experiences engage us deeply and spread widely, while others do not?" Each individual company swears by its own specific game plan for the ins and outs of process management and business-scenario analyses, but in practice it all goes back to understanding consumer engagement between internal and external human factors. For many consumers, old habits are hard to break, which is probably why it's so difficult to get consumers to try new products in the first place, not to mention make them loyal buyers. Clients are no longer looking for a status brand; they tend to pay for the experience.
In addition, the tensions between growth and agility, compliance and transparency increasingly bring data-quality issues to the fore: the infrastructure can't keep up, and marketing faces the same problem - too much data, with not enough structure in place to manage it and not enough meaningful application. An enterprise will always need to tackle information in effective ways, just as most still need industrial techniques to make their products cheaply and efficiently. So what are the consequences of a business world with "too much information"?
Since the breakthrough advances in science, technology and philosophy - including cybernetics, bio-engineering, nanotechnology, machine intelligence, synthetic biology and transhumanism - we have created dystopian cityscapes and vulnerabilities: not only the conditions that allow animal viruses to cross over into human populations, but also mind game-changers for everything you have ever known about spiritual warfare and disrupted ecosystems. From the tactile mirror to the virtual body, its scalability and territorial origins are being monopolized by money and measured by the uneven interests of power, through the appropriation of privacy and surveillance, mapping territories from packages to people. The cloud as a medium, social movements on mobile, Big Data and IoT are transforming physical security in the digital age; explore the major technological forces currently driving digital disruption. Citizens are not clients or customers; rather, they have human rights (democratic citizenship). Beyond the economic/financial, social, health and environmental framing, the COVID-19 crisis is a chance to do capitalism differently.
RESEARCH | 2020: Good decisions on bad data are just bad decisions that the business hasn’t made yet
Everyone is talking about value, whether you recognize it or not, and no one seems to understand it - especially in a world of constant change driven by economic, cultural and social transitions. The Internet is moving toward a new architectural level that aims to put users first, while policy begins to consider an approach that fosters the necessary innovation and investment and allows flexibility and experimentation. Even so, high-tech architectural transitions take time to scale up globally, and I can't be sure whether value is subjective according to the business model that no one wants to name and everyone keeps repeating - selling concepts and actually believing in them blindly. I see value as a consequence of a continuous flow, not a short-scale, state-of-the-art project plan that requires close cooperation between the platform owner and key developers. Priorities change depending on a customer's context; taking that context into account before you think of a value proposition for that customer is crucial. Value propositions and business models are always designed in a context.
Whereas the idea of the internet was a democratic source of information that brought people together, web algorithms have filtered what people search for, exposing their interests along the way, creating an echo-chamber bubble of one's own opinions and increasing an aggressive sense of superiority and bullying attitudes that trigger misleading meanings, reproducing cancel culture with no space to listen - and therefore reducing the communicative competency needed to engage in human dialogue in real life. Besides the freedom-of-speech restrictions and the constraints around privacy and security issues, this obviously affects the building of human connections: it forecasts a false sense of effective communication, and reflects direct health implications, misperceptions and a discursive gap that, in turn, feeds isolation, antagonism and stress. Being aware of this inconsistency can bring the nature of the problems and conflicts to light. Whenever someone hides the bad parts behind quick solutions, I presume it can only cause a bigger structural problem - essentially because a responsible worker likes to be part of a project that breeds lasting competence, to be taken seriously. Conversely, the quality and the vitality of value are not the same thing. There is no doubt that this will determine and reveal the real life cycle of the project. Above all, keep any doubtful percentage of risk and strategy apart from the forward trust placed in other active competitors. Hacknowledgments: "I invented nothing new. I simply assembled the discoveries of other men behind whom were centuries of work. Had I worked fifty or ten or even five years before, I would have failed. So it is with every new thing. Progress happens when all the factors that make for it are ready, and then it is inevitable. To teach that a comparatively few men are responsible for the greatest forward steps of mankind is the worst sort of nonsense." - Henry Ford
Programming Languages
RESEARCH COLLECTION | 2018 - Best Programming Language for Machine Learning
TOOLKIT: Internet of Things (IoT) Cloud-Based Particle Photon WiFi Microcontroller.
BOOKS & ARTICLES | 2020 : SITEPOINT
RESEARCH | 2020: JavaScript community members share their newfound visual coding skills.
DESIGN IOT | Galaxy Watch Studio
Future Perspectives
SAFe principles:
#1 – Take an economic view
#2 – Apply systems thinking
#3 – Assume variability; preserve options
#4 – Build incrementally with fast, integrated learning cycles
#5 – Base milestones on objective evaluation of working systems
#6 – Visualize and limit WIP, reduce batch sizes, and manage queue lengths
#7 – Apply cadence, synchronize with cross-domain planning
#8 – Unlock the intrinsic motivation of knowledge workers
#9 – Decentralize decision-making
#10 – Organize around value
ITCYBER_PENTEST__What is a white hat hacker? | 2022 DevOps | 2022 ITExams | 2021 || METASPLOIT - How to Use Metasploit | Meterpreter | Reverse shell | Metasploit Tutorial || || A Complete Penetration Testing Guide With Sample Test Cases || || Most Important Python Tools for Ethical Hackers & Penetration Testers 2022 || || How to run Flask App on Google Colab || || The Open Web Application Security Project (OWASP) - Application security tools, incidence rates and standards: an open space where all materials are available for free and easily accessible on its website, making it possible for anyone to improve the security of their own web applications. The materials OWASP offers include documentation, tools, videos, and forums. Perhaps its best-known project is the OWASP Top 10 || Scan and extract text from an image using Python libraries || 2022 || The fastest way to learn OpenCV, Object Detection, and Deep Learning. || 2022 ||
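For the "scan and extract text from an image" item above, a minimal sketch might look like this; it assumes the Tesseract OCR engine plus the pytesseract and Pillow packages are installed, and the file name is a placeholder.

from PIL import Image
import pytesseract

def extract_text(image_path: str) -> str:
    """Return the text Tesseract recognises in the given image."""
    return pytesseract.image_to_string(Image.open(image_path))

# print(extract_text("scanned_page.png"))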
WEB TOOLS - Is this website Safe | 2022: How to Check Website Safety to Avoid Cyber Threats Online
ARTICLE | 2020: Apple is rumored to have a secret team of hundreds of employees working on virtual and augmented reality projects. FOOTNOTE: STREAMING SHAKEOUT REPORT | 2020: A massive media shakeout is on the horizon - the war for streaming video has officially begun. ARTICLE | 2020: Apple TV+ to offer augmented reality content as a bonus. Some new iPhones will include the Lidar 3-D scanners used in the latest iPad Pro, making AR apps quicker to load and giving them a better sense of their environment. ARTICLE | 2021: Meet Algo, the alternative non-commercial VPN - and why would I need one? Algo automatically deploys an on-demand VPN service in the cloud that is not shared with other users, for free, and relies only on modern protocols and ciphers. The "VP of all Networks" is strong, secure and tidy, and includes only the minimal software you need. A virtual private network is a secure tunnel between your device and the internet, useful for torrenting or bypassing geographic restrictions to watch content in a different country. VPNs protect you from online snooping, interference and censorship; they make web browsing more secure and stymie any malicious actors who might be on the same local Wi-Fi network, and they let you create a secure connection to another network over the Internet - to access region-restricted websites, shield your browsing activity from prying eyes on public Wi-Fi, and more. Companies all over the world sell VPN services to secure your online activity, but can you really trust a VPN provider? If you want, you can create your own virtual private network with the open-source tool. ARTICLE | VPNs aren't perfect, here are some alternatives. VIRTUAL MACHINE RESOURCES | 2021: A VM simulates a CPU along with a few other hardware components, allowing it to perform arithmetic, read and write to memory, and interact with I/O devices, just like a physical computer. It requires a moderately powerful laptop that supports hardware virtualization - SharePoint solutions, working with multiple programming languages, libraries and operating-system features to support multiple projects. Most importantly, it can understand a machine language which you can use to program it. PENTEST TOOLS | 2022 HARDWARE RESOURCES | 2021: Apple reportedly working on a 32-core processor for high-end Macs
Resources on WebGPU vs Pixel Streaming | 2021: "Two completely new technologies to develop modern graphics-focused software are on the rise. WebGPU is the successor to WebGL and offers remarkable performance improvements. However, pixel streaming (or render streaming, or remote rendering) makes it possible to stream the audio-visual output of hosted cloud software to the client. The client does not need expensive hardware - only a good internet connection - so it goes in a completely different direction and is actively used by the gaming industry." STREAMING | 2020: Streaming Wars - A Tale of Creative Destruction. In the battles over consumer attention and subscription dollars, content, and talent, each new-to-market service has its own strengths and weaknesses. RESEARCH | UPDATE 2020: The 10 most innovative virtual and augmented reality companies of 2020
ARTICLE | 2020: How does photogrammetry work? 3D content creation made easy. RESEARCH | 2020: UPDATE of Feb 21, 2020: Is virtual reality the next channel for digital marketers?
ARTICLE | 2020: Is Augmented Reality the future of contact-free shopping? LEARN COLLECTION | 2020 - The 7 Steps of Machine Learning / TensorFlow Playground
Tools
API TOOLS | 2021: Google Maps JavaScript API Tutorial - BEST EXPLAINED WITH ATOM PLATFORM -
COLOR TOOLS | 2021: Color Palettes for Designers and Artists
IOS APP TOOLS | 2021: iOS Course Resources List
RESEARCH | 2020: What Is the React.js Framework? When and Why Should I Use React.js in My Project?
TOOLKIT | 2020: Face detection using HTML5, javascript, webrtc, websockets, Jetty and OpenCV
TOOLKIT | 2020: Object Detection with HTML5 getUserMedia
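The two toolkits above are HTML5/getUserMedia-based; as a rough offline analogue of the same idea, here is a minimal Python/OpenCV face-detection sketch (assuming the opencv-python package and its bundled Haar cascade; the image file name is a placeholder).

import cv2

# Load the frontal-face Haar cascade shipped with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(image_path: str):
    """Return an array of (x, y, w, h) boxes for faces found in the image."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# print(detect_faces("group_photo.jpg"))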
SOLIDITY RESOURCES 2020 | How do you get an Ethereum account's address, create a new Ethereum address, and send/receive cryptocurrency? TOOLKIT: Tools for Working with Excel and Python TOOLKIT | 2020: Excel Automation Tools (Best of List) TOOLKIT 1
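As a minimal sketch of the first question above (creating a new Ethereum address locally), assuming the eth-account Python package is installed; this is illustrative only, and a real wallet setup needs proper key management.

from eth_account import Account

# Generate a new key pair and derive its address entirely offline.
acct = Account.create()
print("Address:    ", acct.address)      # public, safe to share
print("Private key:", acct.key.hex())    # never share or commit this value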
TOOLKIT 2 - AR / WebRTC, GWT & in-browser computation -
Software Architectural
LEARN | 2020: Martin Fowler Blog - Software Development, primarily for Enterprise Applications :
RESEARCH | 2020: WebSockets - A Conceptual Deep-Dive RESEARCH | 2020: What is Serverless Computing? BOOKS | 2020: Web Application Vulnerabilities: Detect, Exploit, Prevent by Steven Palmer RESEARCH | 2020: What Is Internal Audit's Role in Cyber Security? RESEARCH | 2020: Embedding a Tableau Visualisation with the JavaScript API
RESOURCES FOR TRAINING | 2020: Papers with Code
Teamwork Skills & Project Management:
This is why I believe it is a bad choice to underestimate a worker's point of view, when it could bring the project to another level of success and understanding of how to fill the gaps, making it more resistant over time. Time is an element of success in a world where technological knowledge and artificial intelligence accelerate everything connected in between and around them. Everyone should have the right and the freedom to speak without putting their position at risk; if their opinion helps the project, why not listen and try to understand? Nevertheless, it's important for every team member to understand the life cycle of the business plan so it doesn't contradict the true values of the project. One must constantly question: "Is it a short-, medium- or long-term plan?"; "What are the true intentions behind the concepts?". I believe team members should be informed of the true values so their work can adjust to the company values - an equilibrium of knowledge. If everyone works toward that, the process is easier and the risk percentage decreases. Ethical and professional attitudes count and turn it into a win-win game; they identify and give true perspective to projects. RESEARCH | 2020: Communication in organizations is equivalent to the neural network in the human body. If there is a misfire, the organism becomes inefficient or even dysfunctional.
OPINION | 2020: Where Are We in 'The Cycle'? - viewed through a business-cycle -
######
" High personal accountability:1. Drive for Results. Sometimes in organizations, it is really hard to focus. When we are sending multiple messages about what is critical and what others are accountable for, accountability dissipates. If you want people to be responsible, then you must clearly define the results that you want them to deliver, and let them have a fair amount of control on how they deliver those results.
2. Honesty and Integrity. When your boss asks in a company meeting, “how’s that project coming?” do you honestly reply, “we are really behind” or “pretty good?” Those who are accountable have the courage to tell the truth. This courage is often reinforced because people see their managers being open and direct with them.
3. Trust. We did some research on a set of leaders who were not trusted and found their employees had the following issues:
- I am not confident that my efforts will be rewarded or promoted
- I suspect the leader may take advantage of me
- I constantly question the leader's motives
- I am sure they will take credit for my accomplishments
These are not factors that will build accountability. In contrast, the three pillars that build trust are positive relationships, knowledge, and consistency of leaders.
4. Clear Vision and Direction. There is an old Chinese proverb that explains this issue well: “The hunter that chases two rabbits catches neither one.” In organizations, people are often chasing multiple rabbits and they don’t catch any of them. How can you expect people to be accountable if they aren’t absolutely clear about the organization’s vision for where they’re going and what needs to be accomplished? Clearly, you can’t.
5. Problem Solving and Technical Expertise. It is impossible to feel accountable when a person is confused and doesn’t know how things work. Teach your people the skills and give them the training they need, and make absolutely sure they know how to do the job you expect.
6. Communication. When a leader can effectively communicate, others can understand what they are accountable for. This requires being able to tell, ask, and listen to others.
7. Ability to Change. We found that people who are really good at creating change in an organization had employees who are operating at higher levels of accountability. Leaders who are good at instituting change are effective at the following behaviors: accepting feedback, taking on challenges, innovating, spreading optimism, showing concern, and setting clear goals.
8. Collaboration and Resolving Conflict. Collaboration is a difficult skill to achieve in an organization. Are you cooperating or competing with others in your group? Peter Blau at Columbia University did a series of studies on this issue showing that teams that collaborate and cooperate are far more successful than those that compete. Cooperation breeds accountability. On the long personal and organizational "to do" list, accountability should be at the top. If you see a fatal flaw in yourself or your current leaders on any of these eight points, you should address it immediately. In fact, the single greatest way to leverage accountability is to pick a few of these key behaviors to work on yourself. Why? The research is clear on this issue: great accountability in the organization begins with you."
🔗 ProperTree: https://github.com/corpnewt/ProperTree
🔗 MountEFI: https://github.com/corpnewt/MountEFI
🔗 OC-Gen-X: https://github.com/Pavo-IM/OC-Gen-X/releases
💻 Command to make bootable installer for macOS Big Sur:
sudo /Applications/Install\ macOS\ Big\ Sur.app/Contents/Resources/createinstallmedia --volume /Volumes/MyVolume
🔗 Find the commands for other versions of macOS -----> HERE <----
🔗 Links for SSDTs (make sure to select the correct processor type): https://dortania.github.io/Getting-Started-With-ACPI/ssdt-methods/ssdt-prebuilt.html#intel-desktop-ssdts
💻 Device Properties for Coffee Lake: PciRoot(0x0)/Pci(0x2,0x0) | AAPL,ig-platform-id 07009B3E | framebuffer-patch-enable 01000000 | framebuffer-stolenmem 00003001
💻 Boot-args: -v keepsyms=1 debug=0x100 alcid=1 prev-lang:kbd en-US:0
The RedMonk Programming Language Rankings | June 2020
BOOKS | 2020: Download Programming Books
Update FREE LABS TO TEST YOUR REDTEAM/BLUETEAM and CTF SKILLS :
Share with your network and friends.
· Attack-Defense - https://attackdefense.com
· Alert to win - https://alf.nu/alert1
· Bancocn - https://bancocn.com
· CTF Komodo Security - https://ctf.komodosec.com
· CryptoHack - https://cryptohack.org/
· CMD Challenge - https://cmdchallenge.com
· Cyberdefenders - https://lnkd.in/dVcmjEw8
· Ctftime - https://ctftime.org
· Dfirmadness - https://lnkd.in/dNkzQvXH
· Exploitation Education - https://exploit.education
· Google CTF - https://lnkd.in/e46drbz8
· HackTheBox - https://www.hackthebox.com
· Hackthis - https://www.hackthis.co.uk
· Hacksplaining - https://lnkd.in/eAB5CSTA
· Hacker101 - https://ctf.hacker101.com
· Hacker Security - https://lnkd.in/ex7R-C-e
· Hacking-Lab - https://hacking-lab.com/
· HSTRIKE - https://hstrike.com
· ImmersiveLabs - https://immersivelabs.com
· LetsDefend - https://letsdefend.io/
· NewbieContest - https://lnkd.in/ewBk6fU5
· OverTheWire - http://overthewire.org
· Practical Pentest Labs - https://lnkd.in/esq9Yuv5
· Pentestlab - https://pentesterlab.com
· Hackaflag BR - https://hackaflag.com.br/
· Penetration Testing Practice Labs - https://lnkd.in/e6wVANYd
· PentestIT LAB - https://lab.pentestit.ru
· PicoCTF - https://picoctf.com
· PWNABLE - https://lnkd.in/eMEwBJzn
· Pwn college - https://dojo.pwn.college
· Portswigger Labs - https://lnkd.in/dF8nFyEN
· Root-Me - https://www.root-me.org
· Root in Jail - http://rootinjail.com
· Rangeforce - https://www.rangeforce.com
· SANS Challenger - https://lnkd.in/e5TAMawK
· SmashTheStack - https://lnkd.in/eVn9rP9p
· SnapLabs - https://lnkd.in/d-yGATs7
· The Cryptopals Crypto Challenges - https://cryptopals.com
· Try Hack Me - https://tryhackme.com
· Vulnhub - https://www.vulnhub.com
· Vulnmachines - https://vulnmachines.com
· W3Challs - https://w3challs.com
· WeChall - http://www.wechall.net
· Zenk-Security - https://lnkd.in/ewJ5rNx2
#ctf #pentest #redteam #blueteam #hacking #informationsecurity #cybersecurity
Lean Budgeting:
In the early 1980s, home computing was booming around the world as millions of people bought their very first machine from Commodore, Sinclair, Oric, Acorn, or Atari. The curiosity about merging digital arts with technology has been a part of my life since then.
I refocused my interest in programming - integrating science and the arts - because I saw how traditional creative media companies' business strategies have been struggling to stay competitive in today's media. They have been unable to invest in new models because of their resistance to change, and unable to retain talent because of a lack of investment in it.
The consequences are more than visible now. Consumers have been canceling their pay-TV subscriptions in favor of internet-delivered alternatives since 2011. I recommend reading the following book: "The Netflix Effect"
<----click to see video --- New forms of production, distribution, and exhibition imply different ways of thinking, doing, and experimenting.
|| STORYset | Awesome free customizable illustrations for your next project ||
|| RENDERMAN Tutorials | RenderMan Fundamentals || || Linux Open-source video editor. Free and easy to use for any purpose, forever || || Top 10 Best Cyber Attack Simulation Tools to Improve Your Organization's Security - 2022 || || Penetration Test Reports || # INSIGHTS ON LIVE CODING | Feb 2021 *UPDATE: Programming languages used to simulate spatial computing - visual surround sound (HRTF), using software/hardware tools
RESOURCES 2021 | AI and the future of the mind: Is it possible that we'll merge with AI?