"In the electrifying realm of technology, networking acts as the intricate web stitching our digital world together, where information flows like a vibrant stream connecting devices and minds."
Explore the curated list of free cyber security certifications below:
✅ 1. Introduction to Cybersecurity
✅ 2. Cybersecurity Essentials
✅ 3. Networking Essentials
✅ 4. Intro to Information Security by Udacity
✅ 5. Network Security by Udacity
✅ 6. Fortinet FCF, FCA
✅ 7. Information Security by OpenLearn
✅ 8. Network Security by OpenLearn
✅ 9. Risk Management by OpenLearn
✅ 10. Certified in Cybersecurity℠ - CC
✅ 11. CCNA Security Courses
✅ 12. Network Defense Essentials (NDE)
✅ 13. Ethical Hacking Essentials (EHE)
✅ 14. Digital Forensics Essentials (DFE)
✅ 15. Dark Web, Anonymity, and Cryptocurrency
✅ 16. Digital Forensics by OpenLearn
✅ 17. WS Cloud Certifications (Cybersecurity)
✅ 18. Microsoft Learn for Azure
✅ 19. Google Cloud Training
✅ 20. Android Bug Bounty Hunting: Hunt Like a Rat
✅ 21. Vulnerability Management
✅ 22. Software Security
✅ 23. Developing Secure Software
✅ 24. PortSwigger Web Hacking
✅ 25. Red Teaming
✅ 26. Splunk
✅ 27. Secure Software Development
✅ 28. Maryland Software Security
✅ 29. Stanford Cyber Resiliency
✅ 30. Cyber Threat Intelligence
✅ 31. ITProTV - Free IT exam preparation
✅ 32. 15 Free CISA Courses

Penetration testing and ethical hacking:
✅ arcx
✅ Training
✅ ITMasters
✅ Penetration Testing
✅ TRAINING COURSES
✅ Introduction to Internet of Things
✅ CYBER TRAINING COURSES
✅ TRAINING COURSES
✅ FREE Cybersecurity Education Courses
✅ Junior Incident Response Analyst
✅ CYBER THREAT INTELLIGENCE 101
✅ Penetration Testing
✅ TCM SECURITY - Intro to Information Security
✅ GT - Network Security
✅ Ethical Hacking Essentials (EHE)
✅ Responsible Red Teaming
✅ Developing Secure Software (LFD121)
✅ EC-Council MOON Certifications
✅ CCNA SHEET SUMMARY PDF
✅ HR Fundamentals And Best Practices
The Google Lisbon event covered a wide range of topics that shape technological development, including the role of ethics, security, resources, and budget. Sessions explored spatial computing, with evolving VR content and 360° video ads; iris and fingerprint sensor technology and voice-recognition SDK techniques; dynamic displays, computer vision, creative studio tools, and automated vehicles; and artificial intelligence (AI), machine learning (ML), business intelligence (BI), data science and big data, and deep-learning hardware (NVIDIA GPUs) and software.
Additionally, the talks delved into AI-generated art: deep neural networks and generative adversarial networks (GANs) used to replicate, forecast, and learn generative styles, and to recreate or blend styles of artwork (e.g. Botto, AI-DA, Obvious).
Other topics included IoT across consumer and commercial versus industrial applications.
Google Ads and commerce strategies were also covered: ranking websites, search optimization, and bringing in traffic.
Besides breakthrough advances in science and technology, such as tools articulated through big-data analysis in combination with machine-learning methods, automated cameras, and sensors, the discussions also focused on the future of digital security: biometric identification, the potential risks of facial recognition technology, and the need for stronger data protection laws.
These measures not only enhance cybersecurity but also safeguard the user's digital identity.
At some point, everyone needs to be aware that all technology carries security risks. It is difficult to decide which approach is more feasible and secure, since more security usually means less privacy.
The talks also touched on cybernetics and network infrastructure, which serve as the backbone for all wired and wireless communication, from computer code to genetic code: bio-engineering, nanotechnology, and IoT architectures; machine intelligence, synthetic biology, and transhumanist concepts; flipping bits into molecules and shifting atoms, as in recent DNA data-storage breakthroughs.
From the tactile mirror to the virtual body, they mapped territories from packages to people through media forensics, and through privacy and security surveillance reform built on trust, integrity, reliability and resiliency, authenticity, and compliance.
Finally, the event addressed integrating business, IT, virtual infrastructure software, cloud strategies, and data structures and algorithms.
The talks emphasized implementing strong encryption and authentication protocols to protect data and ensure that it is accessible only to authorized personnel.
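To make that concrete, here is a minimal sketch of authenticated symmetric encryption in Python, using the `cryptography` package's Fernet recipe; the payload and the key handling are illustrative assumptions, not anything prescribed at the event.

```python
# A minimal sketch: symmetric, authenticated encryption with Fernet
# (AES in CBC mode plus an HMAC, so tampering is detected on decrypt).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, store this in a secrets manager
f = Fernet(key)

token = f.encrypt(b"personal data")          # ciphertext + integrity tag
assert f.decrypt(token) == b"personal data"  # raises InvalidToken if altered
```

Only holders of the key can read or forge tokens, which is one simple way to keep data accessible only to authorized personnel.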
# Update: Insights from August 2023
Digital services have transformed how companies do business, even after 2017. Soon after Tim Berners-Lee invented Web 1.0, he suggested that people could go on to develop the Semantic Web. Web 1.0's first-generation capabilities merely provided information to users: browsers could only read data on static web pages, without any interactivity or searchability. Web 2.0 changed this by providing significant interactivity, letting users contribute to online knowledge without any technical understanding. The web progressed rapidly from the 1990s onward, largely driven by three core layers of innovation (mobile, social, and cloud), and Web 2.0 became popular in the mid-2000s with the rise of CSS, JavaScript, and HTML applications, along with meteoric social media growth and data centers located around the world. Today, users rely on phones, laptops, PCs, tablets, headsets, and wearables to access information daily on both Web 2.0 and Web 3.0.

Web 3.0 has four foundational pillars: semantic markup, blockchain and cryptocurrency, 3D visualisation, and artificial intelligence. It uses the same devices as Web 2.0 and continues to make information resources available to the world, with real-time data monitoring, tracking, and immersion, and it supports new interactions such as searching, filtering results, and making data entries. The process of social connectedness, triggered by social media and by mobile browser and app development, moved platforms from "attracting" to "extracting" their users; consequently, individuals and businesses began to suffer through higher fees or platform safety risks across cloud types.

Though as-a-service offerings are growing by the day, there are usually three models of cloud service against which to compare applications: Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS). There are also other public or private services, such as Desktop as a Service (DaaS), Functions as a Service (FaaS), and even XaaS (anything as a service), all of which can be integrated with on-premises IT infrastructure to create a cohesive ecosystem: the hybrid cloud of embedded systems.

In practice, Web 3.0, rather than just searching for content based on keywords or numbers, can use artificial intelligence (AI) and machine learning (ML) to understand the semantics of content on the web, enabling users to find, share, and combine information more easily. How? Through technologies such as neuro-symbolic machine-learning architectures, artificial neural networks, blockchain, 3D graphics, and virtual and augmented reality networks that allow machines to understand and interpret information. Many of the recent revolutions in computer networking, including peer-to-peer file sharing and media streaming, have taken place at the application layer.

The architecture of a blockchain system can be divided into six layers that work as independent systems: the data layer, the network layer (virtualized nodes that connect to the "net operating system"), the consensus layer, the incentive layer, the contract layer, and the application layer. Web 3.0 itself was built on three new layers of spatial technological innovation: edge computing, decentralized data networks, and artificial intelligence. Meanwhile, tokenization and encryption are both used today as methods to secure and protect data stored in cloud services or applications, under different circumstances, as sketched below.
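A minimal sketch of that contrast, under toy assumptions (an in-memory "vault" dictionary standing in for a real token vault):

```python
# Tokenization vs. encryption, illustrated with toy code.
import secrets
from cryptography.fernet import Fernet

# Tokenization: swap the sensitive value for a random token; only the
# vault (here, a dict) can map the token back to the original value.
vault = {}

def tokenize(value: str) -> str:
    token = secrets.token_hex(8)   # no mathematical link to the data
    vault[token] = value
    return token

# Encryption: the value is recoverable by anyone holding the key.
f = Fernet(Fernet.generate_key())
ciphertext = f.encrypt(b"4111 1111 1111 1111")

card_token = tokenize("4111 1111 1111 1111")
print(card_token)             # random token, e.g. 'f3a91c0d2b7e8a54'
print(f.decrypt(ciphertext))  # the key holder always recovers the plaintext
```

A breach that leaks tokens reveals nothing without the vault, whereas a breach that leaks ciphertext plus key reveals everything; that trade-off is why the two methods suit different circumstances.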
Furthermore, Web 3.0's spatial internet properties rely on immersive worlds connected via the web. These potentially interoperable worlds will provide information and services across XR platforms. More importantly, users, including businesses, students, and professionals, can overlay digital content seamlessly on the real world. This creates a spatial relationship to data, allowing a persistent, always-on connection to the internet. Users can now merge and interact with content in their daily lives, moving between the physical and virtual worlds in real time. Large data centres supplemented by a multitude of powerful computing resources spread across phones, computers, appliances, sensors, and vehicles mark a move towards trusting all constituents of a network implicitly, rather than needing to trust each one explicitly, subject to compatible interconnectivity and durable power dependencies (a system-of-objects network). Much of this is unfolding in a world of growing inequality, nationalism, mistrust in traditional government and banking institutions, and environmental strain from ever-faster consumption; consider the climate impact of "proof-of-work" cryptocurrency mining, where currencies require miners to compete to validate transactions on their blockchains using power-hungry servers that create air pollution and carbon emissions. While some configurations are more useful in certain circumstances than others, the topology of a network is key to determining its performance, functionality, connectivity, and security protection.

2022 || What Are Graph Neural Networks?
2022 || Blockchain Technology and the Metaverse
2022 || Ethical Hacking Lessons
2022 || Security Resources
2022 || Cybersecurity (CiberSegurança) Resources
2022 || Security Resources: Data Security Posture Management (DSPM): What Is It And Why Does It Matter Now?

How do information systems (IS) and information technology (IT) thrive in this fast-changing environment and create systems that withstand time as much as possible? Before learning how to create secure software and a secure supply chain (which can include people, companies, natural or manufactured resources, information, licenses, or anything else required to make your end product), you need to understand several key security concepts: confidentiality (keeping data safe), integrity (data is current, correct, and accurate), and availability (a measuring device may be unavailable due to malfunction, tampering, or dead batteries; resilience improves availability). There is no point in memorizing how to implement a concept if you don't understand when or why you need it. Knowing the reason behind security rules and learning these principles will ensure you make secure project decisions and can argue for better security when you face opposition. No matter what development methodology you use (Waterfall, Agile, DevOps), what language or framework you write in, or what audience you serve, without a plan you cannot build something of substance.
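Returning to the integrity concept above, here is a minimal sketch of an integrity check built on a SHA-256 digest; the document bytes and the storage scheme are illustrative assumptions.

```python
# Integrity, the "I" in the CIA triad: record a digest at write time,
# recompute it at read time; any tampering changes the digest.
import hashlib

document = b"quarterly-report-v1"
stored_digest = hashlib.sha256(document).hexdigest()  # saved alongside the data

# Later, when the document is read back:
assert hashlib.sha256(document).hexdigest() == stored_digest
```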
There are several ways to arrange a network topology (architectural, political, and logical), and attempts to experiment with new economic models based on distributed networks reflect different pros and cons. A centralized single point of failure (SPOF, divided into three categories: hardware failures, for example server crashes, network failures, power failures, or disk drive crashes; software failures, for example Directory Server or Directory Proxy Server crashes; and database corruption) leads to a situation in which one malfunction of the server shuts down the whole system, leaving all clients without access to the app. The scalability and maintenance of such a system are limited by hardware: you can increase the resources of the one node by adding more RAM or hard drives (vertical scaling), but you cannot add more nodes (horizontal scaling). On the other hand, the idea of Web3 decentralization could enable a web system controlled by many entities (local private keys) without centralized corporate servers (hosted keys), where every client contributes to the web, as in a peer-to-peer network. Such a system is still vulnerable to failures and attacks (account takeover; digital identity theft; apps containing overly aggressive monetization libraries that collect extensive details from users' devices; money laundering and hacking), but the impact is proportional to the number of master nodes affected. The integrity of the data depends, to a large extent, on the data-management capability and functionality of the repository: nodes with more computational resources than the rest of the network dominate the verification and approval of transactions and control the content of a blockchain; such a node can outpace all other nodes, manipulate the blockchain, insert fraudulent transactions, double-spend funds, or even steal assets from others. A decentralized system is also more scalable, because you can add more nodes (scaling horizontally as well as vertically) that have the same control over the system (BitTorrent, as a peer-to-peer file-sharing protocol, is the classic example). Historically, socially, and politically, such arguments have been powerful in sustaining an ideology of the network market as a non-coercive coordination mechanism within decentralized finance, while governments and corporations increasingly pursue a reconstruction of money as a system of control and surveillance. However, markets require significant legal and ideological enforcement to function in practice, often with substantial systemic coercion. A better understanding of the power and limitations of each network is necessary; it would allow for hybrid approaches and less reliance on claims about what each can achieve. Neither local nor hosted keys are immune to malware, viruses, worms, and other attacks (there are pros and cons in scenarios where you need to upgrade), nor does the distributed-network/decentralization dichotomy automatically yield an egalitarian, equitable, or just social, economic, and political resolution in practice, whatever its ambitions for rearranging power dynamics. The increasing threats to our information, users, and systems from cyber criminals call for training solutions that help organizations educate staff, an essential part of running a healthy organization. Employees are the weakest link in the entire cybersecurity system, so safety training is a must-have in all companies: the slightest distraction may lead to a serious security breach.
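To make the single-point-of-failure argument above concrete, here is a toy availability model, assuming independent node failures (a simplification real systems rarely enjoy):

```python
# Availability of a service with N replicas, assuming each node is up
# with probability p independently; the service is up if any replica is.
def availability(p: float, replicas: int) -> float:
    return 1 - (1 - p) ** replicas

print(availability(0.99, 1))  # single node (SPOF): 0.99
print(availability(0.99, 3))  # three replicas:    ~0.999999
```

Horizontal scaling thus buys availability as well as capacity, which is exactly what a vertically scaled single node cannot offer.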
The security strategy and awareness of internal resources, together with investment in training, are crucial for the success of any security policy and should be carried out not only in theory but also in practice, in the workplace. Observability and security are key to closing vulnerability gaps: modern multiclouds help organizations innovate faster but drive complexity and vulnerabilities, and incomplete observability solutions create big gaps in capability; observability powered by answers and intelligent automation is key to closing those gaps. Here too, IT managers should identify whether they have all the skills needed to train their teams or should resort to an external partner that provides consultancy in this area and performs the assessment and audit on whose basis the following will be defined: technology platforms, technology processes, management and operation, and mitigation and crisis management. Cyber risks are business risks. A network intrusion can cause lasting harm to your organization and its stakeholders; it can also affect your reputation, your people, and even your community. THE INCREASING THREATS - IDENTITY & SECURITY | 2022: With three basic motions (more software, cloud technology, and DevOps practices), digitalization has become an intrinsic element of nearly all organizations. However, the requirement for extra software raises the security threshold, making it difficult to safeguard digital assets. Cloud computing entails the use of cutting-edge technology with varying hazards, as well as the elimination or reframing of the idea of a secure perimeter. When some IT and infrastructure risks are shifted to the cloud and others are specified on a software basis, the risks are mitigated while the need for permission and access control is reinforced. "In this regard, Kubernetes has emerged as the de-facto standard for cloud-native container orchestration. Since the launch of Google Cloud's Google Kubernetes Engine (GKE) in 2014, a radical shift to the cloud-native system has occurred in the software environment." As Google realized the potential of a new search-engine platform for the growing Internet and transferred its virtual knowledge into hardware devices, developing a network with its own operating system, alongside smartphones and web services, it became a place where people could search the repository of human knowledge, communicate, perform work, consume media, and maneuver endlessly. Google also began working on experimental hardware, reshaping the world by linking, sorting, and filtering data at a nearly incomprehensible scale, and representing the infinite amount of information it provides through different services. Meanwhile, algorithms using artificial intelligence are discovering unexpected tricks to solve problems that astonish their developers, raising concerns about our ability to control them. Unlike traditional computer programs, AIs are designed to explore, make decisions, and develop novel approaches to tasks (it wouldn't be so without DeepMind). So we began to question: "How is it possible for an AI system to 'outsmart' its human masters?"
" ---- Since information exchange is increasingly happening online and optimized through interconnected technologies , business prosper the trade-offs between the central principles and values - these were issues discussed at the GDG Devfest : It began focused on how Tech resources could be used for the benefit of society towards the improvement of collective sustainable ecosystems - able to create systems that enforce and guarantee the rights that are necessary to maintain a free and open society. Ecosystems can be symbiotic or parasitic : ecological relationship cocreate value, capable of shaping the market ; wherever economics serve the people and not otherwise - bridging the 'discursive gap' between policy text and practice, unleashes social innovation for societal benefit. "How can technology and innovation become more inclusive?" - On the one hand, there is a wide gap between the pace of law and technology - on the other, there are also different types of Democracy and interpretation views. When the physicist Stephen Hawking told an audience in Portugal during a talk at the Web Summit technology conference in Lisbon, Portugal, in which he said, “computers can, in theory, emulate human intelligence, and exceed it.” he was alerting that AI’s impact could be cataclysmic unless its rapid development is strictly and ethically controlled: “Unless we learn how to prepare for, and avoid, the potential risks,” he explained, “AI could be the worst event in the history of our civilization.” Hawking explained that to avoid this potential reality, creators of AI need to “employ best practice and effective management.” - "Is Your AI Ethical? " - Responsible A.I. Has a Bias Problem, and Only Humans Can Make It a Thing of the Past: As more company’s adopt A.I., more issues will surely come to the forefront. Many business are already working toward making changes that will stop A.I. problems before they go any further. It relies on several key technologies, such as machine learning, natural language processing, rule-based expert systems, neural networks, deep learning, physical robots, and robotic process automation. Some AI applications have moved beyond task automation but still fall well short of context awareness. Responsible AI is defined as the integration of ethical and responsible use of AI into the strategic implementation and business planning process: transparent and accountable AI solutions that create better service provision. Such solutions harness, deploy, evaluate and monitor AI machines, thus helping to maintain individual trust and minimize privacy invasion. “As A.I. systems get more sophisticated and start to play a larger role in people’s lives, it’s imperative for business to develop and adopt clear principles that guide the people building, using and applying A.I.” ; “We need to ensure systems working in high-stakes areas such as autonomous driving and health care will behave safely and in a way that reflects human values.” ARTICLE I : Breaking the Echo Chamber: How Clickbait, Algorithms, and Unethical Journalism Divide Us ARTICLE II: The Current State and Future of AI: Benefits, Breakthroughs, and Safety
The event also held a debate on the major technological forces currently driving digital disruption across cloud, social movements on mobile, and big data, and on how IoT is transforming physical security in the digital age. It's not just Netflix or Facebook whose feeds drive views, privileging the stimulus-response loop of recommendation systems and optimizing for engagement as a key interface for content consumption; this same design has been criticized for leading users towards more extreme content, and other online platforms like YouTube shape users' world views, prompting consideration of new rules for artificial intelligence programs that spread malicious content. "What are the alternatives to the danger of engagement-focused algorithms that hack the brain with deep psychological stimulation of preferences (neuromodulation 'tuned' to activate) through 'emotional surveillance technology'?" In a world where citizens are not products, clients, or customers, they instead reshape public human rights, and the economy is a tool we humans invented, like democracy and politics, to help govern our relationships with each other, ourselves, nature, and the world we live in. If these tools aren't getting the outcomes that make us happy, safe, healthy, and better educated, or protecting and preparing our country for an increasingly uncertain future; if quality of life is stagnating; if jobs, health, and education systems are unfair regardless of how much money you have or where you live; and if our environment is suffering, then it's time our economic tools and practices change to embrace transformative policies that let data analysis improve future business decisions and reprioritise our investments. This is where the rights approach to ethical decision-making comes in, although fairness can be subject to different definitions across divergent languages, cultures, and political systems: it "stipulates that the best ethical action is that which protects the ethical rights of those who are affected by the action. It emphasizes the belief that all humans have a right to dignity" (Bonde and Firenze, A Framework for Ethical Decision Making). Technologies appear to pursue the same goal: to make the city more efficient, connected, and socially harmonious, turning a "healthy body" into an "efficient machine", signaling technologies that transformed the city into a programmed and programmable entity, a machine whose behavior can be predicted, controlled, and modulated according to principles established by some well-intentioned technocrat. Meanwhile, the use of AI within many industries (retail to supply chain, banking to finance, manufacturing to operations), security-oriented predictive AI threat detection (identifying and blocking malware through deep-learning-based detection), and deep-learning acceleration built directly into Intel® hardware (architecture, accelerators, memory, storage, software, security) have all changed the way industries operate. In social media environments, digital marketers created a new way to connect and engage with the target audience and measure media marketing performance. In opposition, this raised ethical concerns and carries the risk of attracting consumers' distrust: harmful marketing appeals, lack of transparency, information leakage, and identity theft.
Developing AI solutions should consider human rights, fairness, inclusion, employment, and equality, which can lead to gains in credibility for products and brands, ensure brand safety, and protect consumers from fraud and the dissemination of fake information, thus increasing customer trust in brands. Recognizing the value of sensitive data and the harm that could be caused if certain data were to fall into the wrong hands, many governments and industries have established laws and compliance standards by which sensitive data must be pseudonymized or anonymized. Europe is putting pressure on internet companies like Facebook and Google to safeguard against hate speech, trolling, cyber-bullying, fake news, online sex traffickers, and terrorist activities online. The GDPR (General Data Protection Regulation), passed by the parliament of the EU and enforceable since May 2018, aims to safeguard the data privacy rights of its citizens, and the act, combined with the EU court's "right to be forgotten" judgment, has set a precedent for the way companies handle the data of their consumers. Individuals now have the "right of data portability" and the "right of data access" along with the "right to be forgotten", and can withdraw their consent whenever they want, including from an intrusive online brand presence.
" Social media marketing is in transition as AI and analytics have the potential to liberate the power of social media data and optimize the customer experience and journey. Widespread access to consumer - generated information on social media, along with appropriate use of AI, have brought positive impacts to individuals, organisations, industries and society " (Cohen, 2018).-- Considering the conscious principles of compromise, chain potentially relevant questions about General Data Protection Regulations :: " How are organisations ensuring that the content posted by staff and consumers does not compromise the ethical principles of the brand - managing their social media presence in line with data protection, visual misinformation and privacy regulations ? ; What do you need to protect : on whatever occasion the adversary gains access to information that is sensitive to you? What are the risks of compromise and how to mitigate them ? What practices and mechanisms can enable firms to cultivate an ethical culture of AI use / How can digital marketing professionals ensure that they utilize AI to deliver value to the target customers with an ethical mindset? "
The importance of taking an interest in privacy law, digital legislation, and the regulation of AI ethics lies in recognizing that digital literacy education serves to build responsibility in civics and citizenship towards the environmental impacts of human-machine relationships, and also to protect and question one's own values critically, bridging the discursive gap between policy text and practice into re-formed conceptions of learning, creativity, and identity in the new machine age. Individuals seek various forms of capital (informational, social, and cultural), for example whenever companies apply artificial intelligence to recruitment or other machine-learning systems; in the process of seeking these types of capital through digital marketing platforms, consumers experience both positive effects (benefits) and negative ones (costs). Filtering perception and awareness ("munitions of the mind"), starting with stone monuments, coins, broadsheets, paintings and pamphlets, posters, radio, film, television, computers, and satellite communications, has been present throughout history, as propaganda has had access to ever more complex and versatile media. The velocity of information flow, the volume of information shared, network clusters, and cross-posts on different social media may be analyzed and compared for negative and positive electronic word-of-mouth. Intra-interaction consequences, such as consumers' cognitive, emotional, and behavioral engagement with the brand, trigger extra-interaction consequences of brand trust and attitude, thus developing brand equity through the DCM strategy. Meanwhile, organizations articulate and build the digitally enabled capabilities required to transform their linear supply chains into connected, intelligent, scalable, customizable, and nimble digital supply networks through synchronized planning, intelligent supply, smart operations, and dynamic fulfillment. Digital supply networks develop through hyper-connectivity, social networking, and cognitive computing, where MATLAB, Excel, and Python can transform raw sound data into numeric data for machine learning and improve training accuracy for deep learning models: "This means that if your data contains categorical data, you must encode it to numbers before you can fit and evaluate a model. The two most popular techniques are an integer encoding and a one hot encoding, although a newer technique called learned embedding may provide a useful middle ground between these two methods." Add to this cloud computing combined with software-as-a-service (SaaS) delivery models, and 3D printing. The use of customer analytics to make smarter business decisions generates more loyal customers; ensuring that customers have positive experiences with the company at all levels, from initial brand awareness through loyalty, is crucial to the success of any business. This often leads to confusion (a 'discursive gap') about when and how to deploy which information technology to maximize value-creation opportunities during stages of the customer journey, usually raising questions such as: "What is the interplay between customer traits (e.g. innovativeness, brand involvement, technology readiness) and attributes of technological platforms in this process? What firm capabilities are required to capture, manage, and exploit these innovation opportunities from customers to gain a deeper understanding of them?"
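Picking up the categorical-encoding quote above, here is a minimal sketch of integer encoding versus one-hot encoding in Python with pandas; the "genre" column and its values are invented for illustration.

```python
# Integer encoding vs. one-hot encoding of a categorical feature.
import pandas as pd

df = pd.DataFrame({"genre": ["rock", "jazz", "rock", "classical"]})

# Integer encoding: each category becomes an arbitrary integer code.
df["genre_int"] = df["genre"].astype("category").cat.codes

# One-hot encoding: one binary column per category.
onehot = pd.get_dummies(df["genre"], prefix="genre")

print(df.join(onehot))
```

Integer codes impose an ordering the data may not have, while one-hot columns avoid that at the cost of width; the "learned embedding" mentioned in the quote sits between the two by mapping each category to a small trained vector.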
There are different types of data: nominal, ordinal, discrete, continuous, interval, and ratio scale. Netflix's dynamic optimizer, for example, attempts to improve the quality of compressed video; it gathers data (initially from human viewers) and uses it to train an algorithm to make future choices about video transmission, with the aim of delivering personalized and targeted experiences to multiscreen audiences to keep them coming back. Netflix's library is vast, but much of the content is geographically restricted due to copyright agreements: the movies and TV shows available are limited by country. When traveling abroad, you may need a VPN to securely access your usual home streaming services. Because not all Netflix shows are available worldwide, many subscribers turn to VPNs that disguise their location and fool the streaming service into offering a content catalog for a different region, but Netflix bans most of them, and not all VPNs work with Netflix. The rapid growth of digital devices and their access to the Internet has created security threats to user data; as attackers adopt ever more advanced measures, security and privacy threats become more sophisticated by the day, increasing the demand for updated technical skills and a highly secure medium for entities and their valuable information on the Internet. "Netflix's machine learning algorithms are driven by business needs." AI is progressing in broadcast and media through mainstream applications, uncovering patterns that aren't always intuitive to human perception and able to change consumer behaviour; the two most viewer-centric applications are content discovery and content personalization. Netflix's new AI tweaks each scene individually to make video look good even on slow internet connections. It also tracks the movies we watch, our searches, ratings, when we watch, where we watch, what devices we use, and more. In addition to machine data, Netflix's algorithms churn through massive amounts of movie data derived from large groups of trained movie taggers. Google is using artificial intelligence to make YouTube content safer for brands: it uses deep learning to build artificial neural networks that mimic the way organic (living) brains sort and process information, applying AI in a number of areas.
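Since artificial neural networks come up repeatedly here, a minimal sketch of a single feed-forward layer in NumPy may help; the sizes and random data are arbitrary assumptions.

```python
# One feed-forward neural-network layer: a weighted sum plus a
# nonlinearity (ReLU), the basic building block deep learning stacks.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))        # one input sample with 4 features
W = rng.normal(size=(4, 3))        # weights: 4 inputs -> 3 neurons
b = np.zeros(3)                    # biases

hidden = np.maximum(0, x @ W + b)  # ReLU activation
print(hidden)
```

Stacking many such layers, and training the weights against data like viewing histories and tagger labels, is what systems of the kind described above do at scale.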
There are dozens of reporting features and metrics for reviewing web services. With Search Console you can monitor, maintain, and troubleshoot your presence in Google's search results and be aware of any warnings or issues that come directly from Google. On a technical level, Google Analytics works through JavaScript tags that run in your website's source code, usually operated with Google Tag Manager; these JavaScript tags set cookies on visitors' browsers that harvest personal and sometimes sensitive data in return. The question arises: "Is Google Analytics GDPR-compliant to use? How do you balance Google Analytics, cookies, and end-user consent on your website?" Google Tag Manager is a hugely popular tool for websites of any size and shape. It organizes all third-party tags on your website (like Google Analytics or Facebook pixels) and controls when they are triggered. Important for website owners to know: almost all such "third-party tags" will set cookies that, according to EU law (the GDPR), fall into categories requiring the explicit prior consent of your users. In other words, tags are what happens, while triggers are when it happens. Inception v3 is a convolutional neural network for assisting in image analysis and object detection, and it got its start as a module for GoogLeNet; related low-level graphics work includes shader programming in the OpenGL Shading Language (GLSL). RESEARCH COLLECTION | 2020: Connect Google Analytics to Google Data Studio
This means that new players have inserted themselves into the value chain, while hundreds of billions of lines of new software code are added to the existing digital infrastructure of our world. I also personally think a hybrid solution is the optimal answer in many software-related cases, as the web development industry is currently going into a hybrid phase as well, with Server-Side Rendering (SSR) and Incremental Static Generation (ISG) data-fetching options: "They solve the problem with pre-rendering the most critical software parts on the server and everything that needs to be asynchronous will be put together in the browser" (Szczeciński, B. (2018), 'What's Server Side Rendering and do I need it?'). There is an abundance of data in the new digital age, and it can be harnessed to gather insights through data analytics. To provide oversight and governance over the collection, usage, and management of data, it's also necessary to understand where the data comes from and whether it was properly permissioned. Systems aren't always identical, because clients present different demands over time (storage, CPU, or database) and these demands change over the lifecycle of the customer's needs; systems that can't evolve and scale don't keep up with the speed of technology change and often get stuck on technical difficulties. More than ever, we are an interconnected world where the actions of one person or device in a social or physical network can have a "butterfly effect" on all the people and devices across that network, and on the languages of interaction (physical manipulation; input using code; mouse manipulation; presence, location, and image; haptic interfaces and multitouch; gesture; voice and speech recognition). There are also higher-cost technologies and cryptographic schemes to weigh, even in the face of quantum computing, with data-infrastructure questions about how lattice-based cryptographic algorithms can be scaled to protect more types of electronic information. The process of creating games, environments, and ads was considered essential to keep scalable, captivating, and engaging within clean design, to avoid damaging the user experience and brand safety in VR. Currently, the high-end tools that exist for XR creation are complicated and difficult to learn, and those without a coding background have more difficulty with implementation, which is why there has been market pressure towards 360° interactive video content, where the only way to disengage from the ad is to remove the headset. "AR" was coined by Boeing researcher Tom Caudell in 1990, and AR comes here at an advantage relative to VR, while XR becomes the solution in between, offering new value in terms of functionality, reliability, convenience, or price. Imagining ways for a new and exciting medium to create unique 3D apps and experiences, shaping the future of art, it enables new applications on smartphones and smart glasses, plus city-wide outdoor localization and tracking: accurate AR wayfinding, visualizing urban points of interest, social AR, architecture pre-visualization, and historical restoration. The design and analysis of geometric algorithms has seen remarkable growth in recent years, due to their application in, for example, computer vision, graphics, virtual reality, medical imaging, and CAD.
Onwards to the next era of spatial computing and how Google allows us to experience 3D and augmented reality in Search. The user experience is still the primary obstacle for AR mass adoption, and the biggest obstacle for VR mass adoption too, even as AR gradually gains influence in the automobile industry. In the future, people will access information via glasses, lenses, or other mobile devices, while autonomous vehicles, drones, and robots move freely through environments, understanding where they are, where they are going, and what is around them. By solving the problem of inaccurate GPS positioning with camera-enabled approaches such as Scape's long-term VPS vision, many of the applications once imagined by AR developers are now a reality, and AR revenues were expected to surpass VR revenues by 2020. Nowadays, almost everyone owns a cellphone, and mobile phones have upgraded to the hardware required for AR technology, including CPU, sensors, and GPU, enabling infrastructure for a vast array of new spatial computing services, accelerated by the imminent arrival of widespread 5G networking and edge compute delivering massive bandwidth at extremely low latency.
RESEARCH COLLECTION | 2020 - Web VR Experiments with Google
LEARNED LESSONS | 2020: WebXR Structures (Estruturas WebXR)
"Customer experience with Digital Content refers to a customer’s perception of their interactive and integrative participation with a brand’s content in any digital media. " - (Judy & Bather, 2019) -
In addition to adding augmented reality to the product value, Microsoft has been offering MS Office applications for its HoloLens device and showing what future offices could look like without screens and hardware. This could also point to new virtual competitors, and AR apps can serve as a further direct-to-consumer channel. Some unanswered questions that are both theoretically and managerially relevant are: "How does it impact consumer-brand relationships if, for instance, consumers 3D-scan branded products and replicate them as holograms? How do consumers interact with virtual products in their perceived real world compared to real products, and what advantages and disadvantages do consumers see? Which dynamic capabilities drive the success of Augmented Reality Marketing? Which competencies do Augmented Reality marketers need, and how should these requirements be integrated into digital marketing curricula to lead to better decisions and lower return rates? How should Augmented Reality Marketing be organized and implemented, and how are good content marketing and good storytelling (inspirational user experiences) organized? What drives the adoption of Augmented Reality? How can the success of Augmented Reality Marketing be measured?" At the end of the event, I suggested to Filipe Barroso
(responsible for organizing the Lisbon Google Developer Group event) that it would be invaluable to get in touch with programming schools like ETIC, so that we could all engage in future educational workshops together, intersecting areas and interacting across events. For a person who is learning, interconnection matters: students expressed openness to initiatives that included group and teamwork contexts, sharing knowledge and opportunities to grow. When I look back on this event, even though I wasn't fully prepared to understand some concepts, it did make sense later. This is the process of knowledge: realizing that even if something does not make sense now, it will eventually connect in the future. There are alternative ways to connect; you don't have to follow traditional advice or go to events to successfully build and maintain a valuable network, and most events are mixing bowls of professionals who are there for different reasons. While attending, I noticed how millennial professionals dislike the idea of meetings, yet contradictorily waste large amounts of time on more expensive events without a good return on their investment of time and money. An activity should be meant to increase the value of your network and/or the value you contribute to it. Proper networking is about building new relationships and deepening your existing ones.
Communication in teams is equivalent to the neural network of the human body. Technologies that support collective interaction include online discussion boards and mailing lists. So even after the Google event, I opened an online channel on our Computer Science Slack called "eventos_tech": a virtual space where I shared everything I had learned with the group, as an incentive for my colleagues to appreciate the importance of exchanging knowledge and being there to help each other; the notion of shared workshops and tech events creates and motivates other, bigger challenges. It was also important to understand how the lack of an adequate project scope, one that contextualizes the project so that it does not become dispersed or misperceived by team members and even future clients, and how underestimating the time and effort required to deliver a task, can turn a challenging project into a hellish one. Without clarity and vision we're unfocused, going nowhere fast. Google Developer Groups (GDGs) are for developers who are interested in Google's developer technology: everything from the Android, Chrome, Drive, and Google Cloud platforms to product APIs like the Cast API, Maps API, and YouTube API.
As the world evolves, businesses struggle to stay up-to-date with new technologies, market trends, and consumer behavior to remain competitive. This can be challenging, as businesses must continually invest in research and development, training, and innovation to keep up with the changing landscape. An adaptive mindset is essential for confronting these challenges directly. Acknowledging opportunities and managing risks are essential for sustaining long-term success, influencing both individuals and entrenched paradigms, which can be resistant to change. Change management, applicable across diverse organizational contexts, demands a structured approach akin to project management, minimizing errors and facilitating seamless transitions. The intricacies of decision-making and strategy formulation are amplified during transitional phases, highlighting the significance of comprehensive analysis. In the contemporary landscape, organizations must prioritize continuous learning, foster collaborative teamwork, and enhance communication strategies to adeptly respond to societal shifts. Achieving results hinges on engaging stakeholders and fostering a culture of collaboration, rather than succumbing to self-interest or competitive isolation. Common catalysts for change encompass technological advancements, process enhancements, crises, evolving consumer preferences, and external pressures such as market entrants, acquisitions, mergers, and organizational restructuring. These catalysts invariably impact individuals, challenging established norms and displacing comfort zones. Proficiency in Organizational Change Management is indispensable across various professional domains, spanning leaders, team members, project managers, IT specialists, HR professionals, and beyond. Integrating change management with project management is pivotal for seamless execution and optimal outcomes. Treating change initiatives as social endeavors rather than top-down mandates enhances their effectiveness and acceptance. Leveraging data to identify patterns of high performance aids in pinpointing communication barriers, inefficiencies in team structure, and spatial configurations conducive to knowledge sharing. The integration of risk management and compliance cultivates a culture of ongoing enhancement, reducing deviations from standards while optimizing resource utilization. Organizational Change or Transformation Management entails crafting robust strategies and predictive models informed by data analytics to avert false starts and optimize outcomes. With an adaptive mindset and a structured change management framework, businesses can navigate transitions successfully, realizing their envisioned objectives.
Organizations want IT methodologies that operate at multiple speeds and know how to balance speed and agility with stability and reliability. Professionals providing agile support face the challenge of continuously generating value for customers and businesses through small changes. With the increasing influence of digital transformation, frameworks such as DevOps, Agile, and Lean have been updated to better integrate with operationalized processes and deliver measurable results. To achieve the desired output, a workflow is important, and sketching materials and tools like logbooks and diagrams, including all types of UML diagrams, are necessary to facilitate the interplay of ideas. Creating a teamwork structure with a strong outline plan and a continuous improvement process can eliminate constant problems and deliver profitable results without feeling overwhelmed. Additionally, professionals can benefit from becoming software literate and considering socially sustainable, responsible strategies. It works best when it's a two-way street. In my personal journey, I have taken a deliberate approach to growth by learning from diverse school models and connecting with their outcomes while questioning them, which has allowed me to gain comparative insights and deepen my understanding. Those within the visual arts have the advantage of learning how to keep a critical distance as they rebuild tools to become software literate. We often hear of projects that ended with unmet expectations in terms of deadline, cost, or quality. Programming has taught me how team visions and individual attitudes towards different tasks determine performance and clarity as a network. More than just making code result in one particular solution, I look for different approaches, comparing results against documented data and understanding test automation, attentive to what experts do, often enough to change my own way of doing things. The process of mastering a skill involves dedicating time to organizing data and practicing through hands-on projects. With enough repetition, this process becomes automatic and helps build problem-solving habits while promoting critical-thinking skills. This leads to a self-reflective and knowledgeable attitude that goes beyond conscious awareness and enables you to connect valuable insights about various industries, products, services, and customer needs. Similarly, when applying data to a business, it's crucial to understand the organization's approach to data and create a structured value proposition and business model that aligns with customer needs, generates scalable revenues, and establishes a proper product-market fit. Neglecting external threats, clinging to resources that don't align with critical elements of the model, or failing to reinvent the model proactively can lead to failure. Everyone fails in the absence of structure, but can we all learn from it without taking it personally? Often in our lives, we are so busy jumping from one task to another that we cannot prioritize or audit our communication channels. This can create a false sense of team culture that normalizes chaos in the workplace, leading to contradictions and flaws in the focus on value. Regardless of the quality of work produced, this attitude creates a culture of overwhelmed and overworked employees instead of promoting personal and professional growth.
Toxic work environments can also emerge, leading to submission to a more powerful or authoritative entity and a sense of distrust or injustice. Do we wish to work in freedom or in fear? Defying the crowd can be difficult, because nobody wants to admit they've made a costly mistake. So what becomes of the protection of personal contribution, and of the genuineness and integrity of creative work? The protection of personal contribution, genuineness, and integrity of work is threatened when we do not recognize where it was originally directed. This can result in a culture of thinking hijacked by those who have no idea what they are talking about. Agile has split into various camps and methodologies. A growth percentage, but from what base? Acting kind is not the same as being human; otherwise, you're essentially a machine governed by opaque rules. Effective communication involves not only speaking clearly but also listening actively, to fully understand what's being said and take personal responsibility. One of the most frustrating scenarios in communication is when two parties think they're on the same page, only to realize they're not; too often, misinterpretation leads to misunderstandings and conflicts. The value chain requires exchangeable knowledge that aligns sustainably. Sacrificing quality to accelerate delivery tends to backfire more quickly than expected, as it affects our ability to continue, maintain, and adapt. Our attitude towards ecosystems and business models matters. In a world where everything is instant and on-demand, we rarely have the chance to think through recovery, even though the rewards of training and discovering new strengths and weaknesses in ourselves are immeasurable. To succeed, we need to understand business intentions, build scalable tools, manage time, and recover during projects. A project plan needs more consideration than we think. Measuring cooperation and competition reveals motivation and provides insight on whether to cooperate or compete. If people only work for themselves rather than for a group, society becomes less functional, while untapped potential remains when individuals are irrationally pitted against each other rather than united towards progress. Collectivist cultures are better at assessing situations, and low-trusting and high-trusting people react differently. Committing to a shared goal without skipping progress takes time, aligning with the learning curve of scaling teamwork performance. Overall, businesses that embrace change management and continuously learn and generate new knowledge and practices will have a competitive advantage in the marketplace.
📅 Blends of Data to Gain Better Insight - for data travelers, knowledge explorers, and hungry network-loving life-forms who are encountering, experiencing, and exploring nature, and who like to get to the bottom of things.
As technology advances, it's important to consider not only its potential benefits but also its ethical implications, security risks, and privacy concerns. The hybridization of technology creates challenges and opportunities for digital content and services - particularly in the areas of mobile, IIOT / IoT, and wearable devices:
Challenges:
Fragmentation: the wide variety of devices and platforms available can lead to fragmentation issues, where content and services may not work correctly or consistently across all of them.
Security: the interconnected nature of internet-enabled devices increases the likelihood of security breaches, underscoring the importance of implementing measures to safeguard against such threats.
Compatibility: It's a challenge to ensure that content and services are compatible with a wide range of hardware and software configurations, which can vary greatly among different devices and platforms.
Complexity: developing applications for hybrid technologies, which combine multiple platforms or devices, can be more complex than creating software for a single platform or device due to the need to consider and integrate various different technologies.
Inequalities: certain groups may not have access to the latest devices or may not be able to afford the cost of upgrading their devices, which could limit their access to digital content and services.
Opportunities:
Reach: the proliferation of mobile, IIoT/IoT, and wearable devices offers new channels through which content and services can reach users in more contexts.
Personalization: these devices offer new opportunities for creating personalized experiences, tailoring content and services to users' specific needs and preferences.
Innovation / new business models: the combination of different technologies and devices can inspire new ideas and approaches to content and service development, such as subscription services, microtransactions, and pay-per-use models, which can generate new revenue streams and increase profitability.
Accessibility: Companies can expand their reach to new audiences and improve accessibility by offering their content and services on multiple platforms and devices.
Digital content requires compatible data mediums across different hardware upgrades. High-end 3D software requires high-performance hardware, including faster processors, large memory capacities, and high-speed networks. Developers must optimize and secure this hardware, despite physical constraints in chip manufacturing and battery life. Integrating new software or replatforming existing software can be difficult due to scalability, connectivity, interoperability, and limited third-party library support. However, these changes can offer benefits such as shorter development time, exposure to more users, backups, synchronization, and performance updates. Developers must carefully weigh all factors when integrating new technology or replatforming existing software to ensure success, balancing technical challenges, compatibility with existing systems, and cybersecurity measures. In an environment of thin margins, the wrong technology investments can have long-lasting strategic and financial implications. Therefore, it is essential to conduct thorough project scoping and accurately estimate the required resources, avoiding underestimation. Additionally, embracing digital transformation and self-organization can foster adaptation and drive innovation, leading to valuable insights from unexpected sources.
Questions arise: "How do organisations articulate and realize their social media marketing objectives? What are those objectives, e.g. brand building, attracting advocates, increasing sales, enhancing visibility, cultivating community communication? Are the objectives short-term or long-term? How can client firms work effectively with their portfolio of marketing agencies to integrate SMM into wider marketing communications? How is campaign success measured? Can different styles of communication be characterized, and if so, are some more successful for marketing than others? How can SMM contribute to the authenticity and trust that consumers place in the brand? What factors affect the success and impact of endorsement?"
Technology has a direct impact on a company's ability to keep production moving efficiently. Even the best architecture, the one most perfectly suited for the job, will be essentially useless if the people who need to use it do not know what it is, cannot understand it well enough to use, build, or modify it, or (worst of all) misunderstand it and apply it incorrectly; all of the effort, analysis, hard work, and insightful design on the part of the architecture team will have been wasted. Creating an architecture isn't enough: it has to be communicated in a way that lets its stakeholders use it properly to do their jobs. If you go to the trouble of creating a strong architecture, one that you expect to stand the test of time, then you must go to the trouble of describing it in enough detail, without ambiguity, and organizing it so that others can quickly find and update the information they need. Documentation speaks for the software architect: it is very difficult to determine the architectural characteristics of an application without fully understanding the inner workings of every component and module in the system. Basic questions about deployment and maintenance are otherwise hard to answer: Does the architecture scale? What are the performance characteristics of the application? How easily does the application respond to change? What are its deployment characteristics? How responsive is the architecture? When choosing technologies, budget should be considered in terms of both cost and potential return on investment, including scalability, profitability, and efficiency. Scalability is particularly important, as critical applications must handle increasing traffic and data demands without compromising availability. Choosing the right programming language and resources to develop a program depends on several factors, such as the nature of the program, the skill set of the developers, the scalability and efficiency requirements, and the budget, always considering the long-term impact of the technology choice and its potential to yield a positive return on investment. When it comes to scaling critical applications, the architecture and infrastructure should be designed with scalability and availability in mind, using technologies and practices such as load balancing, caching, clustering, and fault tolerance, so that increasing traffic and data demands do not compromise the quality of the user experience. Another important factor is ensuring that the people involved in developing and maintaining the application have the necessary skills and knowledge for the technology stack, which may require training, hiring new talent, or outsourcing parts of the development process. Overall, the choice of technology and resources should maximize efficiency, scalability, and profitability while ensuring the quality and reliability of the application. While small business owners understand the value of new technologies, I see how they still struggle with choosing the right products, as well as the right time to adopt them to have the greatest impact on their business.
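Of the scaling practices just named, caching is the easiest to sketch; here is a minimal in-process example in Python (the slow lookup it wraps is hypothetical):

```python
# Caching: memoize an expensive lookup so repeated requests are served
# from memory instead of re-hitting the slow backing store.
from functools import lru_cache
import time

@lru_cache(maxsize=1024)          # bounded, so memory use stays predictable
def fetch_profile(user_id: int) -> tuple:
    time.sleep(0.1)               # stands in for a slow database round trip
    return ("user", user_id)

fetch_profile(42)   # slow: goes to the "database"
fetch_profile(42)   # fast: served from the cache
```

Real systems move this idea out of process, into a shared cache tier, so that many application nodes benefit; that is where clustering and load balancing enter the picture.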
How do you choose the language in which to write your program, or choose resources (human and technological)? Budget should consider two key things: the cost of the technology and its implementation, and whether it yields a concrete return on investment (scalability, profitability, and efficiency). Companies struggle to scale critical applications. As traffic volume and data demands increase, these applications become more complicated, exposing risks and compromising availability. The goal is applications that can handle huge quantities of traffic, data, and demand without affecting the quality customers expect. To prevent an application from becoming slow, inconsistent, or downright unavailable as it grows, remember that scaling isn't just about handling more users; it's also about managing risk and ensuring availability. Different software packages offer diverse approaches to graphic design, some free and some paid, and most programming languages are open enough to allow things to be done multiple ways for a similar outcome. For instance, XR hardware requirements may vary according to network latency, CPU speed, use of proxies, and other factors. The problem of PC, Mac, and Linux headsets comes down to technical specifications: it's well known that PC-type headsets will perform based on how well your PC performs. This is where a Mac has a bit of a disadvantage; you need a fast video card, and Macs are typically fast enough for graphics and some gameplay, but not for VR. Apple computers are only just adding the capability to attach an external video card for VR, as most of them simply do not have the video processing capability to render VR at the resolutions used by the Vive and Rift headsets. However, Apple has introduced VR-ready machines that are more likely to be performance-stable with mixed reality or 360° video. PC hardware does give a better VR experience. To avoid VR sickness, you need a fast frame rate, which is how fast your computer can generate the images on the screen; a lot depends on the complexity of the scene. Mobile controllers offer only 3 degrees of freedom (DOF). This means they track tilt (pitch), yaw, and roll, but not position: if you move the controller flat to your left, in the game your controller hasn't moved at all. This is why you can't grab things with a mobile controller. The HTC Vive and Oculus Rift both have 6 DOF controllers, so you can move them around and grab things, as the sketch following this paragraph illustrates. If you wonder what the optimal resolution is for the highest clarity in VR360: the final clarity depends on a number of factors, from the quality of the source video resolution, frame rate (fps), bit rate (Mbps), dynamic range, compression and rendering pipeline, to latency and the screen's resolution and structure. This is why it's so important to document integration testing with a benchmarking toolkit, and why you need 8K 360° videos for headsets like the Oculus Go, Vive Focus and Samsung Gear with an S8 or S9 (Snapdragon chips on a 3K screen). Understand when to use simple sampling or multiple sampling, and how it will affect your video quality.
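Here is an illustrative sketch of the 3 DOF vs. 6 DOF difference (plain Python, not any headset SDK's real API): translating a 3 DOF pose is a no-op, because there is no position to update, which is exactly why mobile controllers can't grab.

```python
from dataclasses import dataclass

@dataclass
class Pose3DOF:
    """Mobile controller: orientation only (pitch/tilt, yaw, roll, in degrees)."""
    pitch: float
    yaw: float
    roll: float

@dataclass
class Pose6DOF(Pose3DOF):
    """Vive/Rift-class controller: orientation plus a tracked position."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def move(pose: Pose3DOF, dx: float, dy: float, dz: float) -> Pose3DOF:
    """Translate the pose; with only 3 DOF there is nothing to translate."""
    if isinstance(pose, Pose6DOF):
        pose.x, pose.y, pose.z = pose.x + dx, pose.y + dy, pose.z + dz
    return pose

print(move(Pose3DOF(0, 90, 0), 1, 0, 0))  # orientation only: nothing moved
print(move(Pose6DOF(0, 90, 0), 1, 0, 0))  # x becomes 1.0, so it can reach and grab
```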
OPINION: Six Things Brands And Agencies Need To Know Before Making Augmented Reality Campaigns.
OPINION | 2020: Is virtual reality the next channel for digital marketers?
RESEARCH | 2020: Novel math could bring machine learning to the next level
Users dislike slow applications, but broken applications, due to transient network errors, are the worst experience of all: your mobile application must be robust in the face of common networking failures: unreachable hosts, sudden drops in throughput or increases in latency, or outright loss of connectivity. Although high-quality images can make a website stand out, they unfortunately come with a price: due to large file sizes they're bulky to download and result in slow page load times. If you've ever been on a device with a poor network connection, you'll know how frustrating this experience can be. The WebP image format, developed by the Google team, is one solution: WebP images are around 26% smaller than PNG images and around 25-34% smaller than JPEG images, a decent saving where the image quality isn't noticeably affected. Another image format that will become useful in future browsers is Scalable Vector Graphics (SVG). It's not yet a universally supported format, but it is very powerful. Unlike the other image formats, SVG is vector based, which means it is totally scalable without quality loss. You can reduce a JPEG, GIF, or PNG in size, but when you artificially make them bigger they lose quality and appear pixelated.
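As an illustration of the saving, here is a minimal sketch using the Pillow imaging library (the library choice and file names are assumptions, not part of the original workflow) that converts a PNG to lossy WebP and compares the sizes:

```python
import os
from PIL import Image  # pip install Pillow

img = Image.open("photo.png")               # hypothetical input file
img.save("photo.webp", "WEBP", quality=80)  # lossy WebP, quality 0-100

png_kb = os.path.getsize("photo.png") / 1024
webp_kb = os.path.getsize("photo.webp") / 1024
print(f"PNG: {png_kb:.0f} KB, WebP: {webp_kb:.0f} KB "
      f"({100 * (1 - webp_kb / png_kb):.0f}% smaller)")
```

The exact saving depends on the image content and the quality setting, which is why measuring, rather than assuming, is worth the two extra lines.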
Over time, 3D game development languages have evolved to meet the demands of VR, which emphasizes physical immersion and requires a high frame rate and resolution. To achieve this, VR programming is typically done in high-speed languages like C++ (for Unreal and CryEngine) or C# (for the Unity engine), which can sustain 90 frames per second. Modern engines handle tasks such as rendering, physics, terrain, lighting, and AI to create realistic and sophisticated virtual environments, as well as networking to build multi-user experiences that can run on mobile platforms. To ensure smooth VR experiences, it's crucial to have up-to-date hardware, since low frame rates and stuttering can cause cybersickness. As developers continually push the limits of acceleration, FOV, refresh rates, and FPS, investing in expensive VR PCs becomes necessary. Even factors such as user height can contribute to a nauseating VR experience. To explore the latest VR games without experiencing cybersickness, it's recommended to have the latest CPUs, GPUs, and RAM. Nvidia GPU users can use the GeForce Experience app to optimize their PC for each game, while AMD GPU users can use the AMD Gaming Evolved app for the same purpose. At the same time, not everyone can afford to invest in expensive VR hardware, and it is important to make VR accessible to as many people as possible. What makes VR interesting? It offers a level of immersion and interactivity that is unparalleled by any other medium. VR can transport users to virtual worlds, allowing them to explore, interact with objects, and even engage with other people in ways that feel almost as real as the physical world. VR also has the potential to be used in a variety of fields, including entertainment, education, healthcare, and more. It can provide new ways of experiencing and understanding complex data, simulating real-world scenarios, and training individuals for high-risk or complex jobs. Additionally, as technology continues to advance, VR is becoming more accessible and affordable, making it an increasingly popular and exciting area of innovation. "A medium composed of interactive computer simulations that sense the participant's position and actions, and replace or augment the feedback to one or more senses, giving the feeling of being mentally immersed or being 'present' in the simulation" [Sherman and Craig, 2018]. Virtual reality is a medium, which means it is a means of conveying ideas from person to person, or people to people: from creators to recipients. Furthermore, the medium itself is a filter on concepts as they are conveyed, and thus has a higher, over-arching influence on consumers of the medium: "the medium is the message," as McLuhan proclaimed [McLuhan, 1964].
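Returning to the frame-rate requirement at the start of this passage, a trivial back-of-the-envelope helper shows why 90 fps is so demanding: the time budget per frame shrinks to about 11 ms, and the scene is drawn once per eye.

```python
def frame_budget_ms(fps: float) -> float:
    """Time available to simulate and render one frame, in milliseconds."""
    return 1000.0 / fps

print(f"{frame_budget_ms(60):.1f} ms")  # ~16.7 ms for a traditional game
print(f"{frame_budget_ms(90):.1f} ms")  # ~11.1 ms for VR, rendered per eye
```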
Image collage from Cyberpunk 2020 and "VR Sickness: What It Is and How to Stop It". ARTICLE | 2020: WHAT IS VIRTUAL REALITY SICKNESS?
Simulator sickness, also known as VR sickness or cybersickness, can produce a variety of symptoms including sweating, nausea, headaches, and drowsiness, similar to the symptoms experienced by motion sickness sufferers in cars, boats, or planes. The prevailing theory on the cause of VR sickness suggests that it results from a mismatch between a viewer's actual experience and what their brain perceives. According to Kolasinski's (1995) conflict theory, the brain perceives that the input (such as sight or sound) is not genuine or accurate. Normally, the mind detects when something is wrong with the body and attempts to correct it by performing safety checks to ensure that the body is in a healthy state. The detection system searches for cues or hints of abnormalities, such as linking movement and sight, to compile a body of information that can verify the accuracy of the experience. When the cues do not align, the brain goes into a defensive mode that is typically triggered when the body is under attack: a rejection system is activated, as the brain attempts to remove any perceived poisons or threats from the body. A stable frame rate is essential for VR content: because the scene is rendered twice (once per eye) and needs to run at 90 frames per second instead of 60, you may be rendering double the number of objects at roughly 150% the rate of traditional PC games, and even the smallest hiccup can cause an uncomfortable feeling for the player. * Before optimizing your Unity projects, it is crucial to identify the areas where optimization is required. Unity provides built-in analysis tools like the Profiler and Frame Debugger, which can help you understand why your game or experience is taxing the CPU, GPU, and RAM. * To improve performance or quality, adjust the render scale at runtime by following these steps: first, select Play Area under [VRTK_SDK_MANAGER] and [VRTK_SETUP]. Then add a VRTK_AdaptiveQuality component and set the Scale Render Viewport's minimum to 0.5 and maximum to 4. This component automatically adjusts the viewport's resolution based on the performance level, decreasing it when necessary and increasing it when the machine has sufficient processing power. * To reduce the number of draw calls, utilize the single-pass stereo rendering technique, which improves upon regular single-pass rendering. By using GPU instancing, it can cut down on the number of draw calls needed. This technique is currently only available on Windows 10. To enable it, go to Player Settings by selecting Edit ▸ Project Settings ▸ Player from the top menu and access the XR Settings. From there, select Single Pass Instanced (Preview) in the Stereo Rendering Method drop-down to switch rendering techniques. * By default, Unity uses a method called multi-pass rendering to display VR content, which means rendering the full view twice, once for each eye, resulting in a significant performance impact as it doubles the number of vertices processed. However, since version 5.6, Unity has introduced more efficient techniques such as double-wide rendering. This method allows Unity to render both left- and right-eye images simultaneously into one texture, reducing the workload on the GPU and saving on CPU and GPU time: using this technique can yield around 20% less CPU time and 10% less GPU time. * To achieve a smooth frame rate in VR, it is crucial to optimize the geometry, textures, and materials. Quality textures are more important than the number of polygons displayed.
Lossless formats such as PNG or BMP are recommended, since Unity handles compression for the output platform. Mesh Baker automates this process. For efficient culling, small, static level geometry should be separated into different GameObjects. To optimize culling, open the Occlusion window by selecting Window ▸ Rendering ▸ Occlusion Culling from the top menu and adjust the Smallest Occluder value to 2 in the Bake tab. Enabling GPU Instancing on materials can significantly improve performance by allowing Unity to render multiple copies of the same mesh at once, which can save thousands of draw calls. * To avoid VR sickness, use these technical options: (1) Limit movement speed or use teleportation instead of fast movement. (2) Fade to black and teleport players to the ground instead of showing free falls. (3) Opt for natural methods of locomotion, like hand swinging or teleportation, instead of direct movement controlled by a joystick. (4) Fix performance issues and use automatic resolution and effect adjustments. To make players feel comfortable, optimize performance (attending to CPU and GPU cycles), use a fade or cut to black when teleporting, add a vignette around the corners of the view when moving fast, and put players in a vehicle cockpit: because of the way human brains work, it's natural to move and rotate quickly when you're inside a vehicle, so there you can allow direct movement. Thinking about a cross-platform problem: leading mobile platforms like Apple and Google keep delivering new and improved AR toolkits for developers. Google offers ARCore for Android developers, while the iOS platform offers ARKit, loaded with enhanced AR features for building immersive AR applications. These are examples of platform-specific toolkits that block compatibility between different models and don't allow cross-platform rendering.
Thinking about a cross-platform solution: a computer-mediated-reality cross-platform library [ViroReact], installed with npm install -g react-vr-cli, is the cross-platform library solution for React. It is free and open source for immersive experiences, offers effective documentation on native AR apps, and can use a single code base for developing and deploying AR/VR features on both the iOS and Android versions of an app. React gives support for both the front end and the server side (high efficiency, reusability, higher speed, agility and responsiveness of the web app, and a user experience friendly to hybrid space).
React VR apps are written in JSX (JavaScript eXtension), a syntax that allows HTML-like tags to be mixed into JavaScript code. React VR is based on React and React Native. A-Frame apps use HTML with custom HTML tags; it is a powerful framework providing a declarative, composable, reusable entity-component structure for three.js. A-Frame can be used from HTML, although developers still have access to JavaScript, DOM APIs, three.js, WebVR, and WebGL. Both of them allow for custom JavaScript code and interfacing directly with three.js and WebGL. Machine learning has revolutionized cybersecurity, enabling novel attacks to be detected and blocked. All of these technologies, if not secured properly, could absolutely lead to the theft of sensitive information (integration and security). The influx of online traffic underscores the need for organizations to address IT resilience, a company's ability to handle a technical disruption. Resilience is the ability to withstand adversity and bounce back and grow despite life's downturns; it empowers people to accept and adapt to situations and move forward. Quantum machine learning, quantum simulation, and quantum-inspired computation are generating a lot of interest and now have industrial applications. Between hype and revolution: on the potential applications of quantum computing in vaccine discovery, it has helped in the management of COVID-19 in terms of spread and the development of therapeutics. Realizing the power of quantum computing, we are likely to witness its application in many industries in 2021 and beyond. Qiskit [kiss-kit] is software that sits between quantum algorithms and the physical quantum device: it translates common programming languages like Python into quantum machine language. It is an open-source SDK for working with quantum computers at the level of pulses, circuits and application modules (a minimal circuit sketch appears after the quote below). There is still a large gap between the computational resources available on current hardware and the resources required for some of the oft-promised applications of quantum computers, such as integer factoring and molecule simulation. An important aspect of software optimization for extracting the best performance from a given application-device pair is "qubit mapping": software architects need to write programs that take into consideration the hardware, connectivity, and available gates to "map" program-level qubits to on-chip qubits as quickly and efficiently as possible. To see where quantum technology plays a role, it is important to spot where classical computing falls short: "Classical and quantum computers are the computers we already know and use. Formally known as binary digital electronic computers, they operate by representing information as sequences of zeroes and ones ("binary digits" or "bits") and processing them with devices based on the physics of electrons ("electronic computers"). Each bit can hold one of two values: "0" or "1"; there are no intermediate values. Electronic computers execute algorithms on these bits using simple logic operations (AND, OR, NOT, etc.) to form useful results. A new class of computing devices was proposed in the early 1980s, involving quantum bits ("qubits") instead of bits. Unlike bits, qubits can be in a combination of states, and so hold a superposition of "0" and "1" states at any one time. As the number of qubits increases, so too does the number of states simultaneously held by the set of qubits. Qubits are processed using quantum computers.
These execute algorithms using quantum gates, which are logical building blocks that operate on all possible states of a set of qubits simultaneously. Once a quantum computation is complete, the output is measured, which causes the entangled superposition of states to collapse to a single classical state. Quantum computers with many qubits are theoretically capable of operating much more quickly than any classical computer. Quantum computers are not a replacement for classical computers; both have their strengths and weaknesses. However, quantum computers are fantastic for solving particular mathematical problems where classical computers struggle, so a problem that may take a classical computer quadrillions of years to solve could take a sufficiently powerful quantum computer just a few days. Several types of encryption rely on these kinds of mathematical problems, and so "cracking" many types of encryption will be possible with quantum computers." Explainer: What is post-quantum cryptography? How will quantum computing transform cybersecurity? Over time, developers are considering blockchain-based systems for the future: alternative systems allowing information storage, with their advantages and disadvantages, as well as determining where they are the better choice and where alternatives may offer a better solution. Blockchain will play a central role by capturing user activity at the conclusion of a track play, analyzing the play for legitimacy or fraud detection, then applying a series of network- and individual-influenced coefficients. This will derive a reward value using the credit system within Current's platform. The question remains: why would a developer want to build on EOS, and blockchain in general, vs. the cloud or some other centralized alternative? The main objective of the Current protocol is to facilitate transfers of value between media services by partnering with media networks. Due to increased video traffic, which will account for 82% of all Internet traffic by 2022, the 4K/8K and 360° video formats require more transcoding space and processing power, delivered in a better, cheaper, faster and more optimized way through a strategic transformation of technology. To enable content owners to deliver ultra-low-latency, high-quality video content, and reduce their reliance on complex and costly transcoding services, cloud storage providers, and aggregators, Eluvio has launched its Content Fabric for internet video distribution. While creating audiovisual projects I found software knowledge indispensable for deploying certain parts of my applications. Modules, extensions and plugins extend the capabilities of a larger application. There is a difference between a framework and a library, both already written by other developers and created for the rapid prototyping and development of their works. A library is a collection of subroutines or classes used to develop software; I think any component that instantiates its own database falls outside the scope of a library. A plugin is a ready-to-use software component that can be added to an existing one to change or add features. There are libraries for various functionalities, such as computer vision, as well as for motion detection devices like the Microsoft Kinect camera or Leap Motion. Toolkits that are used (and often created) by creative coders include: Cinder; GNU Octave; Max MSP; openFrameworks; Processing; Pure Data; vvvv.
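Circling back to the Qiskit SDK mentioned above, here is a minimal sketch of the superposition-and-measurement story told in the quote: a two-qubit Bell circuit (exact API details vary between Qiskit versions, so treat the calls as indicative).

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)   # two qubits, two classical bits for the results
qc.h(0)                     # Hadamard gate: qubit 0 enters a superposition of 0 and 1
qc.cx(0, 1)                 # CNOT gate: entangles qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])  # measurement collapses the state to a single classical value
print(qc.draw())            # text diagram of the circuit
```

Run on a simulator or real backend, the two measured bits always agree (roughly half "00", half "11"), which is the entanglement the quoted explainer describes.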
----> It's important that a professional has the ability to diagnose and solve problems that happen, or could happen, in a company's infrastructure: "How do you combine the objects of MIBs (Management Information Bases) and SNMP (Simple Network Management Protocol) so that they offer interesting information about the network? What are the thresholds for each interesting performance index? How do you get to the problem that is occurring on the network? What are the suggestions to solve the problem? How do you prevent this problem from happening again?" All of these are questions asked by network management teams, so I updated my helpdesk knowledge to understand the responsibilities around user complaints about information technology features: fixing problems and passing them on to other technicians. As part of a technical support team I carried ultimate responsibility for network maintenance and configuration. As a network operator I practiced the responsibility of receiving the alarms generated by the management station. There are also the team manager's skills: directing and monitoring the performance of team members. Over time, you'll find that some problems are easier to test once others have already been tested. For example, some network card diagnostics involve testing communication with remote equipment, so it makes more sense to test the network cable connected to that card before testing the card itself. In some work environments, one or two people form the whole team and accumulate all of these roles themselves.
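To make the MIB/SNMP questions concrete, here is a hedged sketch using the pysnmp library (the library choice, community string, and device address are assumptions) that polls a single MIB object, the kind of check a management station automates before raising an alarm:

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Read sysUpTime from a hypothetical router via SNMPv2c.
error, status, index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public'),                 # community string (assumption)
    UdpTransportTarget(('192.0.2.1', 161)),  # example device address
    ContextData(),
    ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysUpTime', 0))))

if error or status:
    print('polling failed:', error or status)  # this is what feeds the alarm queue
else:
    for name, value in var_binds:
        print(name, '=', value)
```

Comparing values like this against the thresholds mentioned above is where "interesting information about the network" actually comes from.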
Throughout professional practice I encountered new challenges while editing, transcoding, decoding, rendering, and delivering TV broadcast video content, and came to appreciate the importance of containers, as they hold metadata about the media in the file. That metadata can be as simple as the frame rate of the video, or extend to what camera and lens were used to record the footage, what settings were applied, where it was shot, and information about the specifications of the shot and the production. The metadata within a container can sometimes also tell what standards the footage was produced in, and complement the camera shot list / shot designer storyboard planning documentation.
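In practice, that container metadata can be inspected with a tool like ffprobe; a minimal sketch (ffprobe ships with FFmpeg and is assumed installed, and the file name is hypothetical):

```python
import json
import subprocess

def container_metadata(path: str) -> dict:
    """Ask ffprobe for the container and stream metadata as JSON."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True).stdout
    return json.loads(out)

info = container_metadata("broadcast_master.mxf")  # hypothetical file
video = next(s for s in info["streams"] if s["codec_type"] == "video")
print(video["codec_name"], video["r_frame_rate"], info["format"].get("tags"))
```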
Tracking and sorting the RAW and Log video formats is the alignment made before the rough cut and before processing the color correction. These are necessary steps to neutralize the image profile, even before actually beginning to edit the colors with color grading, and they save a lot of time in the grading steps. The important thing is to identify the file information and the type of compression and codec in use, so you know in advance whether you will be able to grade as much as you need or will be limited by technical specifications. The question of what people look for in video editors usually comes down to: Lift-Gamma-Gain; Shadows-Midtones-Highlights; Blacks-Mids-Whites. By adjusting the blacks first you set your baseline to adjust the rest of the image; you will notice that adjusting the blacks and whites affects your waveform as a whole, so it is a matter of finding the right balance between the two. Midtones don't affect blacks and whites, so you leave them for last. If you raise the midtones, the image loses saturation in general, so to compensate, increase the saturation a little to keep the colors vibrant. Well-exposed skin usually sits between 60 and 70 IRE on the waveform. Be careful not to pull the midtones too much, because they usually bring a lot of noise to the image; but as with every variable in cinema, the more knowledge and experience you have in photography, the easier it is to opt for a slightly higher or lower IRE depending on the look you're after, or even the camera and format in which it was recorded. This area covers several others, interspersed through post-production, and speaks directly to the narrative desires of the director and director of photography. The colorist is a technical artist of the film, and their knowledge of formats, displays and scopes intertwines with their artistic ability to accentuate the tone of the scene. Color correction is the technical and mechanical process. The use of scopes is essential (Waveform, Vectorscope, Parade). Even if you don't have a calibrated monitor, which can be very expensive, if you trust your scopes and work correctly with them, you can guarantee a great technical result. The Waveform gives you all the information about the luminance in the scene, the Vectorscope gives you the information about chrominance, and the Parade shows you the red, green and blue values separately. The entire video clip receives manual adjustments to get good exposure and balance of lights, and each clip is adjusted to a color temperature set previously for the scene. Monitors, and even your eyes, end up adjusting to the light and color of the environment. The steps of color correction can directly affect how much information is kept in the image; ideally, follow this order of operations: ---> Remove artifacts and use de-noise; balance your shots by black/mid/white, saturation and white balance; re-illuminate within a shot using power windows or masks; add gradients, diffusers, and other lens filters; add vignettes; grade the images; simulate any film stock you want; resize and add detail. <--- You don't have to perform all of these steps on each shot, but if you do perform them, that is the ideal order. Color grading is the creative process where decisions are made to establish or create the desired mood for scenes through software: accentuating certain colors, emulating styles, among other choices.
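For intuition about reading a waveform by the numbers, here is an illustrative sketch only (a real waveform monitor works on calibrated video levels; this rough 0-100 mapping of 8-bit luma is an assumption) that checks where a frame's blacks, mids and whites sit:

```python
import numpy as np

def luma_stats(frame: np.ndarray) -> dict:
    """Rough 'software waveform': percentile luma levels on a 0-100 scale."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 luma weights
    scale = luma / 255.0 * 100.0                 # map 0-255 to a 0-100 scale
    return {"blacks": np.percentile(scale, 1),   # the baseline you set first
            "mids": np.percentile(scale, 50),    # skin often targets ~60-70
            "whites": np.percentile(scale, 99)}

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)  # stand-in frame
print(luma_stats(frame))
```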
Being purely creative, there is no right or wrong way of doing things, just what the director of photography, colorist and director want for the scene. The challenge is making the right choice: the tools available are numerous and powerful, and the question is how to use them accurately for what the film asks. Nowadays it is possible to create or download LUT files for cameras like Blackmagic and RED for free on the internet, so you can practice your colorist skills in DaVinci Resolve (also free) or even in ordinary video editors. Working as an editor taught me the importance of exporting files safely into the right format, the container format of a video file, an audio file and metadata, and how it affects video broadcast and streaming protocols: a "container" or "package" used for video transmission, storage and playback. That means understanding the need for an MXF metadata wrapper, which primarily exchanges pictures, sound (synthesized audio or MIDI) and generally static elements (still graphics or text), as well as data items (teletext or closed-caption files), along with a small amount of metadata, to take advantage of the benefits of converting video formats. I questioned how to support multiple protocols simultaneously ("What protocol do you use when sending video to multiple devices at once?"), how to transcode and transrate, and how to produce streams at different resolutions and qualities to deal with different bitrates and decoders/players. This justifies multi-channel encoding: adaptive bitrate encoders, which produce multiple profiles for compatible destinations to choose from, and transcoding media servers, software and services that let you manipulate and multiply your source video streams to suit your application. Different protocols are designed for different applications. Streaming protocols allow encoded video to subsequently be transported, either in real time or at a later time. The protocol affects how a viewer can interact with the video, the reliability of delivery of that video stream, and which devices or software players can access it. Some protocols can only be used by specific vendor hardware, significantly reducing the interoperability and potential reach of that content. Latency is a key component, as protocols used across the cloud or public internet may differ from protocols used for facility AV infrastructure over IP. There are five common streaming protocols that professional broadcasters should be familiar with: HLS, RTMP, SRT, MSS, and MPEG-DASH. Assume an organization has added new equipment capable of generating very high resolution, such as 4K; using codecs that produce a small enough bandwidth might be enticing, but the codec and/or encoding profile used at the source to mitigate its bandwidth use may not match the optimal codec or encoding profile for content distribution at large. Transcoding can be expensive, and archiving only the highest-resolution content to avoid storage costs doesn't resolve the technical issues. Distribution always requires well-established technologies for maximum compatibility and reach.
Streaming Protocol vs. Codec vs. Container Format: logically, different streaming codecs are used for different purposes. MXF is a container format that the stand-alone Adobe Media Encoder can encode and export movies to. Premiere Pro can export MXF files containing the MPEG-2 essence items that comply with the XDCAM HD format or other broadcast media. All Apple ProRes codecs support all frame sizes (including SD, HD, 2K, 4K, and 5K) at full resolution; the data rates vary based on codec type, image content, frame size, and frame rate, and applying the least compression for the best imagery results in the largest files, which can be expensive over time. So the question before choosing the protocol or format should be: "Why transcode when it increases file size significantly?" It only makes sense when you need to convert to an editing codec or deliver in a specific codec (e.g. iPhone, Vimeo, WMV, etc.). The editor is forced to convert highly compressed memory card content, not designed for editing, to an edit-friendly codec, which is usually much less compressed; that means larger file sizes on compatible hardware that actually supports it (updated disks and PCs). Today, adaptive bitrate streaming technology automatically detects a user's bandwidth and computer processing availability in real time and provides a media stream that fits within those constraints. In enterprise and media-and-entertainment encoding, this means that video sources are often sent at their maximum quality and resolution profile, but the local encoder and/or streaming server also creates additional copies of the source at reduced settings. A 4K source, for example, can be kept in 4K and decoded at an appropriately powered viewing node, but the same source can comfortably supply tablets and smartphones, which often have a lower-resolution screen anyway; the corresponding reduced-resolution stream is served to match what the wireless network and the processing power of those devices can handle ("scaling" of video sources). In fact, when a video is encoded into a certain format, its data is compressed in order to be stored while consuming less space. For example, the newer HEVC codec can dramatically reduce the file size of H.264 videos, by up to 50%. The M2T file extension corresponds to the technology standards between studios and broadcasters for faster transfer between media servers; it was mostly used for broadcasts integrated with ATSC (Advanced Television Systems Committee) standards. Developed for satellite broadcasts as well as terrestrial broadcasting applications where bad signal issues are common, the stream synchronization technology and error-correcting features embedded in these .m2t files help improve the overall quality of .m2t streams. These .m2t files usually contain digital motion graphics or animations, audio data, sound clips and effects, 2D and 3D graphics, as well as text content for subtitles and so on. The video content stored in these .m2t files has a maximum resolution of 1080i. This understanding makes the exported video file compatible with a wide range of playback systems that rely on the M2T or MXF standard for broadcast and archiving (e.g. the Digital Cinema Package, DCP). Ideally a codec should have hardware support, which can sometimes be a problem as it takes time before newer codecs are supported at the hardware level. Video Encoding: The Definitive Guide [Updated for 2021] - What Are the Benefits of Different Video Formats?
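As a concrete illustration of multi-profile encoding, here is a sketch that shells out to FFmpeg (assumed installed; the file names and the bitrate ladder are hypothetical example values, not a recommendation) to turn one high-quality source into reduced renditions for adaptive delivery:

```python
import subprocess

SOURCE = "master_4k.mov"  # hypothetical mezzanine file
LADDER = [                # (height, video bitrate): illustrative ladder
    (2160, "16M"),
    (1080, "6M"),
    (720,  "3M"),
]

for height, bitrate in LADDER:
    subprocess.run([
        "ffmpeg", "-i", SOURCE,
        "-vf", f"scale=-2:{height}",   # keep aspect ratio, force even width
        "-c:v", "libx264", "-b:v", bitrate,
        "-c:a", "aac", "-b:a", "128k",
        f"rendition_{height}p.mp4"], check=True)
```

A streaming server or packager then serves whichever rendition fits the viewer's network and device, which is the "scaling of video sources" described above.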
Video compression became an important area of research in the late 1980s and 1990s and enabled a variety of applications, including video storage on DVDs and Video CDs, video broadcast over digital cable, satellite and terrestrial (over-the-air) digital television (DTV), and video conferencing and videophone over circuit-switched networks. Nowadays, when dealing with camera raw images, Apple ProRes 422 LT is the most used norm intended for offline workflows, because it offers low data rates but full-resolution video. Even though it solves some offline production issues, it is a real problem for cloud or online video streaming, especially when more than one person is accessing the same data (server overload): an average 90-minute Apple ProRes 422 HQ file is around 150GB, which means it can take several days to upload, with the computer processing the upload uninterrupted the whole time (running day and night). Real networks are also heterogeneous in rate: streaming video at home over a modem (56 kbps) vs. a corporate LAN (10-100 Mbps). ONLINE LIBRARY | 2020: Next-Generation Video Coding and Streaming (image: block diagram of a video coding simulation). The reason you can't edit the colors as you want in a file made on your DSLR is that the image has already undergone heavy compression to be recorded on the SD card, so the number of exposure stops (a unit of measure of light during the exposure of a scene) is far from ideal for color editing. The F-stop is basically a mathematical measure of the amount of light that passes through the lens based on its design. The T-stop is the measure of light passing through the lens according to its actual transmission rate; basically it is the F-stop adjusted by the amount of light the lens really transmits. DSLRs don't handle overexposure well (when there's too much light in the scene); it is very easy to lose information in highlight areas. All these small characteristics of each camera, lens and recording format become noticeable to professionals over time, making them able to identify nuances between different cameras, sensors and even lenses just by checking an image. A file with a lot of compression drastically limits the possibilities of editing its light and color levels; it is important to say this so that you do not feel frustrated when trying to learn these steps with files that have already gone through several compressions. What raw recording, and some Log variations, brought was exactly the possibility of achieving a higher dynamic range, but this happened step by step, with the evolution of sensors, processors and recording cards. Companies usually provide ready-made profiles with a contrasted look that even appears beautiful on camera, but when checking on your monitor you will notice blacks without any information and blown-out whites. As professionals, the ideal is not to fall back on these ready-made solutions but to work the image according to what we need, capturing it flat so we can define the look at grading time. Aesthetic results with the camera and in post-production require precise decision-making: to make smoother (photorealistic) footage, do not add footage to the timeline whose native frame rate differs from the sequence's frame rate.
For example, if you have a 30 frames per second timeline, do not add a 23.976 fps clip, because the editor will have to create, magically out of thin air, roughly six phantom frames per second that might damage the raw footage. You can combine clips with different frame rates in sequences (mixed-frame-rate footage): first open a clip in the Source Monitor to view its original frame rate and frame size. Know how to deal with different frame rate clips on the same timeline whenever you import a clip that was shot for slow motion (60 fps or higher). Scenario #1: What will happen if you decide to transcode into ProRes? 50 hours of ProRes 1080 HQ equals 3.78 TB. So, if you want to keep your original H.264 files, decide to transcode them into ProRes HQ files, and want at least 2 backup copies of all of the material, you will need: 1 TB (H.264 raw files) + 3.78 TB (the ProRes transcodes), times 3 (for the backups) = 14.34 TB of hard disk space in total. In this case, if you shoot with a single camera you'll need 7.17 TB in total.
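The same arithmetic can be captured in a small helper, useful when budgeting drives for a shoot; a sketch reproducing Scenario #1's numbers:

```python
def project_storage_tb(raw_tb: float, transcode_tb: float, copies: int = 3) -> float:
    """Total disk needed when raw files plus transcodes are kept in `copies` copies."""
    return (raw_tb + transcode_tb) * copies

total = project_storage_tb(raw_tb=1.0, transcode_tb=3.78)  # Scenario #1 figures
print(f"{total:.2f} TB total, {total / 2:.2f} TB for a single camera")
# -> 14.34 TB total, 7.17 TB for a single camera
```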
Virtual infrastructure: cloud infrastructure and virtualization solutions and services.
Network virtualization is different from server virtualization (and plays a significant role in cloud technology / cloud computing): at the physical level, network operation depends on specialized, shared programmable hardware (packet forwarding hardware) and associated real-time software. The flexibility of the cloud endeared it to corporate consumers for the following reasons: On-Demand Service – use it when needed, which provides some degree of freedom for customers. Network Access – utilizes the internet and can be accessed using laptops, workstations, and smartphones. Pooling of Resources – resources are pooled to provide customers customizable variable costs based on business size. Scalability – scale up or down based on your current needs. Cloud computing service providers offer different service models according to the customer's needs. The service models are called SaaS, PaaS, and IaaS, and are often depicted in a pyramid-like structure. Examples of SaaS – banking applications, social media apps, Slack, app-based games. Examples of PaaS – SAP, AWS Elastic Beanstalk, Google App Engine, Apache Stratos (Hou, n.d.). Examples of IaaS – these are more for system administrators and include Rackspace and Google Compute Engine (GCE). "As cloud computing adoption has grown, Cloud Networking has emerged adjacently." Cloud computing is best for access from outside the office network (data mobility), good for selling your service or software to external users, and works as IaaS (Infrastructure as a Service), but the disadvantages, risks and benefits of cloud computing should be considered: #1. Risk to data confidentiality – there is always a risk that user data can be accessed by other people. #2. Dependence on the internet connection – the weakness of the public cloud is that everyone accesses the same servers, which increases the risk of attack and of bringing the server down. #3. The level of security (secrecy / data privacy and confidentiality) and technical problems – when you experience a problem, you cannot sue the provider for errors in the data. #4. Compliance – while many websites offer an API, they are often expensive or have very strict rate limits, even if you're working on an open-source and/or non-commercial project or product. #5. Vulnerability in the event of an attack – wide-open exposure to attacks on data or activities stored on the server. #6. Low connection quality – the connection can get slow, harmed by poor server quality. Virtualization, by contrast, meant for access from the office only, is best for setting up a data center within the company network/infrastructure, based on the SaaS (Software as a Service) service type. Server virtualization is a local issue: the interface between a software workload and a virtual machine. With virtualization, companies can take advantage of the efficiencies and agility of software-based compute, data migrations (migrating to different hardware platforms), and the encapsulation of data and storage resources. A virtual network is distributed software and hardware with more mobile file systems, with distributed storage already handled in abstracted ways, not as raw hardware. Virtualization allows businesses to simulate hardware such as servers, storage devices, or network resources, increasing efficiency for development and test environments.
The virtualized components function exactly the same way traditional hardware would, with added benefits to the user and the business: streamlined data collection and the flexibility to access it from anywhere. ARTICLES | 2020: Serverless Architectures - The "Big Three" cloud vendors, Amazon, Google, and Microsoft, are heavily invested in Serverless, and we've seen plenty of books, open-source projects, conferences, and software vendors dedicated to the subject. But what is Serverless, and why is (or isn't) it worth considering? "Serverless architectures are application designs that incorporate third-party 'Backend as a Service' (BaaS) services, and/or that include custom code run in managed, ephemeral containers on a 'Functions as a Service' (FaaS) platform. By using these ideas, and related ones like single-page applications, such architectures remove much of the need for a traditional always-on server component." ARTICLES | 2020: On project scalability and profit: Serverless is not right for every project. The enterprise is left with the responsibility to secure the application layer: managing and monitoring access to the application(s) and the data, enforcing legitimate application behavior, monitoring for errors and security incidents, and so on. Serverless applications that are accessed infrequently may need extra time before enough instances of virtual containers and databases are able to respond in a timely fashion. "The codec story is always the same: everyone always wants to get better-quality video in a more efficient state, but the challenge is getting people to adopt it," commented Ben Wood of the CCS Insight consultancy (July 2020), on the codec, called both H.266 and Versatile Video Coding (VVC), announced by Germany's Fraunhofer Heinrich Hertz Institute. Although 8K televisions are becoming more common, there is currently little content available in that resolution. Apple, Microsoft, Qualcomm, Ericsson, Intel and Huawei were among those that worked on its development. H.266 is designed to require half the bitrate (the amount of data transmitted per second) of today's standard H.265. "H.265 requires about 10 gigabytes of data to transmit a 90-minute ultra-high definition [4K] video," explains a press release. "With this new technology, only 5GB are required to achieve the same quality." Imagine how much bandwidth VR needs: some experts believe VR headsets will only be able to match reality when they can display 16K or higher resolutions. Meanwhile, Google has previously opted to use a format it developed itself, VP9, instead of H.265 to encode YouTube videos. And a consortium, which includes Amazon, Apple, Facebook, Google and Samsung, has already developed a successor, AV1; the video-game streaming platform Twitch has said it intends to switch "100%" to that format by 2024. Codecs and containers are frequently confused and mistakenly used interchangeably, especially when transferring content data, in terms of scalability, to the cloud or web. It's really important to understand the requirements and technological prerequisites of the production. It's something you figure out the hard way, after testing on different hardware and software, because they don't actually teach this at school, given its depth and complexity. Reading and sharing knowledge with other technicians helps you understand it better.
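The bitrate-to-file-size relationship behind those numbers is simple enough to sanity-check yourself; a minimal sketch (the bitrates are illustrative, chosen to roughly match the H.265 figure quoted above):

```python
def file_size_gb(bitrate_mbps: float, minutes: float) -> float:
    """Approximate file size from average bitrate: megabits/s to gigabytes."""
    return bitrate_mbps * minutes * 60 / 8 / 1000

print(f"{file_size_gb(15, 90):.1f} GB")   # ~10 GB: a 90-min 4K video at H.265-like rates
print(f"{file_size_gb(7.5, 90):.1f} GB")  # ~5 GB: the same cut at half the bitrate, H.266's promise
```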
INSTAGRAM AND FACEBOOK DON'T SUPPORT 4K VIDEO FOR ALL USERS: EVEN IF THE APP READS THE EDITED FOOTAGE AND UPLOADS IT, WHAT IT DOES IS APPLY AUTOMATED COMPRESSION ALGORITHMS. Understanding the big picture in terms of solutions: perhaps some day, when we have near-light-speed internet connections, we won't have to worry about these 'implementation details'. Besides reducing the power consumption of video encoders becoming a prime design requirement, there is a need for a structure that allows frame synchronization to be performed independently of the technology's inherent delays, resorting to a parallel data pattern detector and a bit alignment shifter; frame synchronization is becoming more and more problematic as communication systems increase their transmission rates. The recent example of Netflix and other services such as Facebook, which agreed to reduce the quality of their streaming in order to help internet infrastructure cope with the increased traffic caused by the Covid-19 coronavirus, reveals how this issue will become more pressing over the years: it anticipated the increase in remote work, which will likely have broad implications for internet speed (lower upload Mbps, megabits per second) and the economy, particularly consumer spending. However, until then, quality-to-filesize ratios will always be important, particularly in the age of 4K and ever-increasing screen resolutions. If someone tries to upload lossless video on a less-than-amazing internet connection, they'll get frustrated and quit. As displays increase in size, compression techniques become more efficient, playback devices become more sophisticated, and internet connections improve, so will the quality of 4K videos. The fact that they have a larger file size means they require more bandwidth and storage space to upload, so 4K is not available to all users. Have you ever questioned why software editing programs, or web apps similar to Instagram or Facebook, take so long to upload or read a video, and why they crash so much, or even overheat mobile phones and PC components, as you work with higher resolutions? Equivalent incompatibility issues also happen with disks that don't support 4K (e.g. 5400 rpm or even 7200 rpm): when pushed to perform beyond what they can handle, the hardware overheats, crashes, damages data, or gets you stuck in a loop. The truth is that higher resolution requires more space and a larger profit margin to maintain, and not every piece of hardware can truly support its scalability: you will need more drive and server space, which can be a huge issue when creating websites (quality of time and product vs. profit; upload quality vs. upload speed). It's also a time-consuming experience. Actually, downscaling 4K to HD will increase image detail without losing quality, after considering the set of resolutions, bitrates and settings used for high-quality video encoding and the reasoning behind those choices: high-resolution HD images can carry more detail than their lower-resolution SD counterparts, and 10-bit images can carry finer gradations of color, thereby avoiding the banding artifacts that can occur in 8-bit images. The three key properties of digital images that contribute to image quality are frame size, chroma sampling, and sample bit depth, balanced against delivering industry-leading performance and quality at each supported data rate.
The role of a codec is to preserve image quality as much as possible at a particular reduced data rate, while delivering the fastest encoding and decoding speed. At the end of the day, you will always have your raw image and its metadata backed up, protected and valuable: intellectual property, patents and licensing play an increasingly important part in the development of video coding technology and applications.
BOOKS & ARTICLES | 2020: GPU VS CPU For Video Editing
VIDEO NETWORK | 2020: Editing in 4K: Minimum System Requirements / To start editing video, you'll first need to ask yourself some questions about your projects. What type of footage will you be editing: R3D, CinemaDNG, ProRes, XAVC S, MP4? How complex are your projects: single shot, single camera, multi-camera, animation, VFX? What are your output formats? How long do you have to deliver your edits? Finally, are you editing online or offline? With all the constant, fast technological upgrades in digital media, even the most experienced editors have a hard time understanding the difference between format and frame rate, or other technical definitions in the set of standard characteristics regarding image capture on photographic film. Most will spontaneously assume that higher data values are wiser, without really questioning the veracity of this information, looking blindly at sales profit. Digital video is frequently compressed and coded for easier storage and transmission. Video compression systems vary across broadcast TV, IPTV, telecommunications and many other video applications. A careful balance between the amount of compression, the visual quality loss and the coding speed is necessary to keep the total system cost down while delivering a good user experience for various video applications. At the same time, power consumption optimizations are also essential to get the job done on inexpensive consumer platforms (Digital Video Concepts, Methods, and Metrics: Quality, Compression, Performance, and Power Trade-off Analysis). Take into consideration that clients have different hardware and internet conditions (this may also change according to country and global web protocols). So, when dealing with high-resolution video, if anyone has had media crashes (errors), I would recommend you understand your camera hardware first, including the speed of the SD card when shooting high frame rates (24 fps video is ideal for a cinematic look and feel; capturing more motion for great slow motion and smoother video requires higher frame rates); also, choose the frame rate according to the subject you're shooting. Then analyze the properties of the raw footage and all the hardware and applications that will interconnect with that footage, to avoid incompatibilities, delays and misunderstandings between teams. Understand efficient video compression, how to improve video quality in communication for computer encoding applications, and monitor refresh rates first, well before using utility tools like Dropbox (an FTP-alternative file transfer): use it for larger media files to avoid compression problems on big files. Avoid Google Drive for heavy raw footage: it will damage some data properties when it automatically compresses for you, and it will be extremely slow. Besides that, Google Drive and Google Cloud Storage are not the same thing; there is a difference between consumer and professional cloud storage. Rather, use such tools for already-edited movies, to avoid crashes and media compression errors. They're not the most secure platforms to use as a server (only as risky stopgap alternatives), although they're temptingly cheap; consider other, more reliable technology in order to safeguard your privacy and media data, because these platforms are very easily hacked.
RESEARCH | 2020: Why Formats Matter
RESEARCH | 2020: The Importance of Codecs and Containers
One of the most valuable lessons you learn whenever you start a project or product test is to build a minimum viable product, to check whether the idea is viable, and to understand potential customers to see if they identify with the problem the same way you do. Adopting an experimentation mindset, you'll be more willing to make mistakes, test a variety of different ideas and unlock more long-term value, as you'll give more ideas a chance to come to fruition. The main question comes down to: "What is the right design path between the quality and the value of a project? Does a wrong action bring you back onto the road afterwards, or does it by any chance create new ramifications?" The future represents an uncertain non-place that already demands a sense of choice and dynamic, efficient risk management, where the critical path represents the longest path through the project. Besides predicting the total duration of the project, critical-path risk analysis identifies the individual tasks that will require a greater level of monitoring and control, so each risk can be avoided, mitigated, transferred or accepted. The point is to eliminate or minimize impact: the high-risk tasks take the longest amount of time and can undermine project success. No one is immune to making mistakes; we are human, after all! You may have to reframe or try out several ways to put your learning into practice before you find one that successfully prevents you from repeating past errors. That is why I usually identify the skills, knowledge, resources, or tools that will keep me from repeating the error in the first place.
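As a toy illustration of the critical-path idea (the task graph and durations are invented for the example), the longest-duration path through the dependency graph is what bounds the whole project:

```python
from functools import lru_cache

TASKS = {  # hypothetical project: task -> (duration in days, dependencies)
    "design":    (5, []),
    "prototype": (7, ["design"]),
    "test":      (4, ["prototype"]),
    "marketing": (6, ["design"]),
    "launch":    (2, ["test", "marketing"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task: str) -> int:
    """Earliest day a task can finish: its duration after all dependencies finish."""
    duration, deps = TASKS[task]
    return duration + max((earliest_finish(d) for d in deps), default=0)

print(max(earliest_finish(t) for t in TASKS))  # 18 days: design -> prototype -> test -> launch
```

The tasks on that longest chain have no slack, which is why they are the ones to monitor and control most closely.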
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Things started accelerating when Apple and Google created an ecosystem of mobile developers. That's when mobile apps appeared. Now Microsoft, Facebook and other tech companies claim a virtual reality universe is the future of the Internet. For example, Facebook, the largest social network, suggests that individuals are creating a virtual network consisting of both bonding and bridging social capital. Facebook has previously funded academic research into the social impact of AR wearables and solicited VR hardware proposals. Facebook's announcement blog calls the metaverse the "next computing platform" and says the company will be working with policymakers, researchers, and industry partners while building the successor to the mobile internet. In addition, important services like Airbnb, Instagram, Uber, WhatsApp and Amazon are also collecting data. As a result, our relationship to our devices is changing: they are becoming avatars, extensions of us. Millennials want mobile products in their image: confident, connected, and open to change. Only users decide what works for them.
As a consumer and researcher I realised that keeping customers interested in a product or service requires a good posting strategy and business plan, because it will mainly reflect on the product you're selling and call upon the target public, nurturing symptomatic cause and effect. Providing regular content might project the image of a great marketing strategy, as it helps your customers stay updated and shows interest in ways to improve and provide even more value. However, as important as regular content is to a marketing strategy, helping customers stay updated and the product improve, it will be pretty useless if there is no consistent approach: whenever planners are more concerned with how it will look to visitors than with quality engagement, they can create a misleading culture. What does that mean? You might choose to simply paste massive numbers of posts in the hope someone eventually spots the work and follows compulsively, but quantity does not justify quality. Why? Because different platforms usually mean different audiences: many Instagram followers will not be LinkedIn users, and vice versa. It's important to question everything that every business is mostly concerned with: What products or services do people want to buy, and why? What kinds of websites do people mostly visit? What are the biggest hobbies nowadays, and how can my product or service help? What types of posts do people share the most? Hitting the market with the right products at the right time: in practice, new product and service development is a complex, daunting task for engineers, design engineering managers, and those involved in other functions in the project. Product design and engineering processes, and their management, reach from sources of innovation to collaboration with suppliers, knowledge providers (for example, creators), and users. The managerial aspects get ample attention, as do the socioeconomic aspects, in the context of product design and engineering. ...... The five key strategic stages: * Where are we now? - Strategic and marketing analysis * Where do we want to be? - Strategic direction and strategy formulation * How might we get there? - Strategic choice * Which way is best? - Strategic evaluation * How can we ensure arrival? - Strategic implementation and control ...... Photo footnote from the Double.org site: Growth loops and the 6 pillars of growth / The More You Learn, the Cheaper It Becomes: www.interaction-design.org. The disruption caused by the Covid-19 pandemic has accelerated business dependence on technology across the world: businesses have become reliant on the cloud and have a deeper need for different business sectors to interact, to work in a symbiotic networking way and not as a parasite (an isolated relationship). Exponential growth in digital infrastructure and connected devices will drive the need for 5G and even 6G technology, but the high cost of infrastructure has a price: some countries will have more difficulty with hardware and software implementation, promoting inequalities between developed and underdeveloped countries and societies. As businesses adapt their culture to embrace the cloud, they must also adapt their approach to developing talent. Each country will feel obliged to adapt to different cloud services to assure their integrity and quality and to create value. The web tends to be built on territories, delivering inequality that will eventually push products and the market. Another shift is tying security into your cloud.
Driven by advances in deep learning that started around 2015, AI provides an edge for enterprises that wield it smartly alongside cybersecurity. Dramatically faster 5G connectivity could unlock an array of technologies, from autonomous cars that share traffic data to immersive virtual-reality games. It's not just a phone technology: it could replace wired broadband in the home and allow for billions of other connected devices. Internet of things (IoT) sensors and technology connect components across a city: sensors and connected devices allow cities to manage and monitor infrastructure, transport and financial systems, postal services, and more. Traffic could run more efficiently, payment transactions could be made more secure, and remote emergency surgery (think advanced drones replacing paramedics) could become commonplace. Smart cities usually occupy six dimensions: people, technology, infrastructure, management, economy, and government. Chris Lane, a telecom analyst at Sanford C. Bernstein, thinks 5G will turn cities "smart" in much the same way electricity illuminated them. "Everything will be connected," he says, "and the central nervous system of these smart cities will be your 5G network." This means better visibility, efficiency, and performance, but it also exposes the population to more risk, because the entire system is connected: if one node is attacked, many more may be affected. The vulnerabilities range from protocol weaknesses to DDoS attacks, in which a botnet takes down the targeted website or slows it down by flooding the network, server, or application with fake traffic, potentially slowing the victim's machines to the point where work becomes impossible. 5G will increase the risk and potential damage of large-scale distributed denial-of-service (DDoS) attacks, where an attacker overloads a machine or network with traffic to render it useless. DDoS attacks are used to disable the online services of banks and e-commerce platforms, but a city's critical infrastructure is a significant weak spot. 5G also inherits protocol weaknesses, for example in authentication and key agreement (AKA), the method of encrypting communication between devices and cellular networks; AKA was previously used in 3G and 4G networks and is known to be vulnerable to international mobile subscriber identity (IMSI) catchers and to interception of traffic and sensitive information. With both of these threats on the horizon, regular security practices such as supply-chain security, access control, patch management, threat hunting, and configuration management should be carried out against 5G threats. We should ask ourselves, "How do we ensure 5G safety?", because there is more to do to keep our cities and societies safe.
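To make the flood-traffic idea concrete, here is a minimal, illustrative sketch (in Python, my choice; the window length and threshold are made-up numbers, not recommendations) of the simplest rate-based detection: count each source's requests over a sliding window and flag the ones that exceed a baseline.

from collections import defaultdict, deque
import time

WINDOW_SECONDS = 10   # sliding-window length (illustrative)
MAX_REQUESTS = 100    # per-source threshold (illustrative)

recent = defaultdict(deque)  # source IP -> timestamps of its recent requests

def record_request(source_ip, now=None):
    """Log one request; return True if the source now looks like a flood."""
    now = time.time() if now is None else now
    window = recent[source_ip]
    window.append(now)
    # Evict timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS

# Simulate a burst from a single address (203.0.113.7 is a documentation IP).
for _ in range(150):
    flagged = record_request("203.0.113.7")
print("flood suspected:", flagged)

Real DDoS defenses work at a much larger scale, upstream of the server, and track far more than raw request counts; this only shows the core counting idea.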
Examples where AI is already improving the current approach to cybersecurity: intrusion detection, where machine learning helps detect and defend against intrusions without any human intervention; malware detection; detection of code vulnerabilities, where algorithms such as linear regression, logistic regression, classification and regression trees, k-nearest neighbors, k-means clustering, and neural networks scan vast amounts of code and automate the process of identifying potential vulnerabilities (before the hackers do); and detection of fraudulent transactions and activity, which can be flagged and prevented in real time by learning patterns and identifying deviations from the expected baseline behavior. Such risks become even more heightened in work-from-home and remote-working scenarios, where the workforce is much more likely to be distributed and outside the organizational security perimeter. THE AI CANVAS, a simple tool to start making decisions with the help of AI, can guide organizations through sustainable AI deployment.

Writing software is like making commercial products: quality is absolutely fundamental. So, "How can you guarantee software purpose?" and "Why do some digital innovations and experiences engage us deeply and spread widely, while others do not?" Each company swears by its own game plan for process management and business-scenario analysis, but in practice everything goes back to understanding consumer engagement between internal and external human factors. For many consumers, old habits are hard to break, which is probably why it's so difficult to get consumers to try new products in the first place, let alone make them loyal buyers. Clients are no longer looking for a social-status brand; they tend to pay for the experience.
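Returning to the baseline-deviation idea above: as a hedged illustration, scikit-learn's IsolationForest (one of many possible algorithm choices, picked here for brevity; the features and numbers are invented for the example) can learn what "normal" transactions look like and flag outliers.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline behavior: small, frequent transactions (amount, hour of day).
normal = np.column_stack([
    rng.normal(50, 15, 500),    # typical amounts around $50
    rng.normal(14, 3, 500),     # typical activity mid-afternoon
])

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# Score new activity: one routine purchase, one huge 3 a.m. outlier.
new_activity = np.array([[55.0, 13.0], [4000.0, 3.0]])
print(model.predict(new_activity))   # 1 = looks normal, -1 = flagged as anomalous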
In addition, the tension between growth and agility, and between compliance and transparency, increasingly brings data-quality issues to the fore. While the infrastructure can't keep up, marketing faces the same problem: too much data, with not enough structure in place to manage it and not enough meaningful application. An enterprise will always need to tackle information in effective ways, just as most still need industrial techniques to make their products cheaply and efficiently. So what are the consequences of a business world with "too much information"?
Since the breakthrough advances in science, technology, and philosophy, including cybernetics, bio-engineering, nanotechnology, machine intelligence, synthetic biology, and transhumanism, we have created dystopian cityscapes and vulnerabilities: not only the conditions that allow animal viruses to cross over into human populations, but also mind games that change everything we have known about spiritual warfare and disrupted ecosystems. From the tactile mirror to the virtual body, scalability and territoriality are being monopolized by money and measured by the uneven interests of power, through the appropriation of privacy and surveillance, mapping territories from packages to people. The cloud as a medium, social movements on mobile, Big Data, and IoT are transforming physical security in the digital age; they are the major technological forces currently driving digital disruption. Citizens are not clients or customers; they have human rights (democratic citizenship). Beyond the economic and financial, social, health, and environmental framing, the COVID-19 crisis is a chance to do capitalism differently.
RESEARCH | 2020: Good decisions on bad data are just bad decisions that the business hasn’t made yet
Everyone is talking about value, whether you recognize it or not, and no one seems to understand it, especially in a world of constant change driven by economic, cultural, and social transitions. The Internet is moving toward a new architectural level that aims to put users first, while policymakers begin to consider an approach that fosters the necessary innovation and investment and allows flexibility and experimentation. Even so, high-tech architectural transitions take time to scale up globally, and I can't be sure whether value is subjective according to a business model that no one wants to name and everyone keeps repeating, selling concepts and believing in them blindly. I see value as a consequence of a continuous flow, not of a short-scale, state-of-the-art project plan that requires close cooperation between the platform owner and key developers. Priorities change depending on a customer's context; taking that context into account before you formulate a value proposition for that customer is crucial. Value propositions and business models are always designed in a context.

Whereas the idea of the internet was a democratic source of information that brought people together, web algorithms have filtered what someone searches for, exposing their interests along the way, creating an echo-chamber bubble of one's own opinions and feeding an aggressive sense of superiority, bullying attitudes that trigger misleading meanings, and a cancel culture with no space to listen, thereby reducing the communicative competency to engage in human dialogue in real life. Besides freedom-of-speech restrictions and constraints around privacy and security, this obviously affects the building of human connections: it forecasts a false sense of effective communication and produces direct health implications, misperceptions, and a discursive gap that in turn feeds isolation, antagonism, and stress. Being aware of this inconsistency can bring the nature of the problems and conflicts to light. Whenever someone hides the bad parts behind quick solutions, I presume it can only create a bigger structural problem; a responsible worker wants to be part of a project that fosters real competence and to be taken seriously. Quality and vitality of value are not the same, and there is no doubt that this will determine and reveal the real life cycle of the project. Above all, keep any doubtful percentage of risk and strategy apart from the trust extended toward other active competitors.

Hacknowledgments: "I invented nothing new. I simply assembled the discoveries of other men behind whom were centuries of work. Had I worked fifty or ten or even five years before, I would have failed. So it is with every new thing. Progress happens when all the factors that make for it are ready, and then it is inevitable. To teach that a comparatively few men are responsible for the greatest forward steps of mankind is the worst sort of nonsense." ― Henry Ford
Programming Languages
RESEARCH COLLECTION | 2018 - Best Programming Language for Machine Learning
TOOLKIT : Internet of Things (IoT) Cloud-Based Particle Photon WiFi Microcontroller.
BOOKS & ARTICLES | 2020 : SITEPOINT
RESEARCH | 2020: JavaScript community members share their newfound visual coding skills.
Future Perspectives
SAFe principles:
#1 – Take an economic view
#2 – Apply systems thinking
#3 – Assume variability; preserve options
#4 – Build incrementally with fast, integrated learning cycles
#5 – Base milestones on objective evaluation of working systems
#6 – Visualize and limit WIP, reduce batch sizes, and manage queue lengths
#7 – Apply cadence, synchronize with cross-domain planning
#8 – Unlock the intrinsic motivation of knowledge workers
#9 – Decentralize decision-making
#10 – Organize around value
ITCYBER_PENTEST | What is a white hat hacker? | 2022 || DevOps | 2022 || ITExams | 2021 || METASPLOIT - How to Use Metasploit | Meterpreter | Reverse shell | Metasploit Tutorial || A Complete Penetration Testing Guide With Sample Test Cases || Most Important Python Tools for Ethical Hackers & Penetration Testers 2022 || How to run Flask App on Google Colab || The Open Web Application Security Project (OWASP) - application security tools, incidence-rate data, and standards: an open space where all materials are free and easily accessible on its website, making it possible for anyone to improve the security of their own web applications. The materials OWASP offers include documentation, tools, videos, and forums; perhaps its best-known project is the OWASP Top 10 || Scan and extract text from an image using Python libraries | 2022 || The fastest way to learn OpenCV, Object Detection, and Deep Learning | 2022 ||
WEB TOOLS - Is this website Safe | 2022: How to Check Website Safety to Avoid Cyber Threats Online
ARTICLE | 2020 : Apple is rumored to have a secret team of hundreds of employees working on virtual- and augmented-reality projects.
FOOTNOTE: STREAMING SHAKEOUT REPORT | 2020: A massive media shakeout is on the horizon; the war for streaming video has officially begun.
ARTICLE | 2020 : Apple TV+ to offer augmented-reality content as a bonus. Some new iPhones will include the Lidar 3-D scanners used in the latest iPad Pro, making AR apps quicker to load and giving them a better sense of their environment.
ARTICLE | 2021: Meet ALGO, the alternative non-commercial VPN, and why would you need one? Algo automatically deploys an on-demand VPN service in the cloud that is not shared with other users, for free, and relies only on modern protocols and ciphers; it is strong, secure, tidy, and includes only the minimal software you need. A virtual private network is a secure tunnel between your device and the internet: it creates a secure connection to another network, so you can access region-restricted websites, shield your browsing activity from prying eyes on public Wi-Fi, and stay protected from online snooping, interference, censorship, and malicious actors on the same local network. Companies all over the world sell VPN services to secure your online activity, but can you really trust a VPN provider? If you prefer, you can create your own virtual private network with open-source tools such as Algo.
ARTICLE | VPNs aren't perfect; here are some alternatives.
VIRTUAL MACHINE RESOURCES | 2021: A VM simulates a CPU along with a few other hardware components, allowing it to perform arithmetic, read and write memory, and interact with I/O devices just like a physical computer. It requires a moderately powerful laptop that supports hardware virtualization, and it lets you work on SharePoint solutions and across multiple programming languages, libraries, and operating-system features to support multiple projects. Most importantly, it understands a machine language you can use to program it.
PENTEST TOOLS | 2022
HARDWARE RESOURCES | 2021 : Apple reportedly working on a 32-core processor for high-end Macs
Resources on WebGPU vs Pixel Streaming | 2021 : "Two completely new technologies for developing modern graphics-focused software are on the rise." WebGPU is the successor to WebGL and offers remarkable performance improvements. Pixel streaming (also called render streaming or remote rendering) instead streams the audio-visual output of hosted cloud software to the client; the client needs no expensive hardware, only a good internet connection, so it goes in a completely different direction and is actively used by the gaming industry.
STREAMING | 2020: Streaming Wars - A Tale of Creative Destruction. In the battles over consumer attention, subscription dollars, content, and talent, each new-to-market service has its own strengths and weaknesses.
RESEARCH | UPDATE 2020 : The 10 most innovative virtual and augmented reality companies of 2020
ARTICLE | 2020: How does photogrammetry work? 3D content creation made easy.
RESEARCH | 2020 : UPDATE of Feb 21, 2020: Is virtual reality the next channel for digital marketers?
ARTICLE | 2020: Is Augmented Reality the future of contact-free shopping?
LEARN COLLECTION | 2020 - The 7 Steps of Machine Learning / TensorFlow Playground
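Those "7 steps of machine learning" (gather data, prepare it, choose a model, train, evaluate, tune, predict) compress into a few lines with scikit-learn. This toy sketch, using the iris dataset and logistic regression purely as examples of my own choosing, skips the tuning step:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Steps 1-2: gather and prepare the data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Steps 3-4: choose a model and train it.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Step 5: evaluate (step 6, hyperparameter tuning, is skipped here).
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Step 7: predict on unseen data.
print("prediction:", model.predict(X_test[:1]))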
Tools
API TOOLS | 2021: Google Maps JavaScript API Tutorial - BEST EXPLAINED WITH ATOM PLATFORM -
COLOR TOOLS | 2021: Color Palettes for Designers and Artists
IOS APP TOOLS | 2021: iOS Course Resources List
RESEARCH | 2020: What Is the React.js Framework? When and Why Should I Use React.js in My Project?
TOOLKIT | 2020: Face detection using HTML5, JavaScript, WebRTC, WebSockets, Jetty and OpenCV
TOOLKIT | 2020: Object Detection with HTML5 getUserMedia
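In the spirit of the two toolkit entries above, the same face-detection idea in Python with OpenCV might look like this minimal sketch; the webcam index and the bundled Haar cascade are assumptions about your setup, and this is a demo, not production code.

import cv2

# Load the frontal-face Haar cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # default webcam; adjust the index for your device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Draw a green box around each detected face.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()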
SOLIDITY RESOURCES 2020 | How do you get an Ethereum account's address, create a new Ethereum address, and send/receive cryptocurrency? (a code sketch follows after this toolkit list)
TOOLKIT : Tools for Working with Excel and Python
TOOLKIT | 2020: Excel Automation Tools (Best of List)
TOOLKIT 1
TOOLKIT 2 - AR / WebRTC, GWT & in-browser computation -
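Answering the Solidity-resources question in code: with web3.py (one common Python choice, not the only one; the RPC endpoint and recipient below are placeholders of mine), creating an address and sending ether looks roughly like this. Receiving simply means sharing your address.

from web3 import Web3

# Placeholder RPC endpoint: swap in your own node or provider URL.
w3 = Web3(Web3.HTTPProvider("https://example-node.invalid"))

# Creating a new account works offline: the address is shareable
# (that is how you receive funds); the private key must stay secret.
acct = w3.eth.account.create()
print("new address:", acct.address)

# Sending ether (sketch only; needs a funded account and a live node).
if w3.is_connected():  # w3.isConnected() on older web3.py releases
    tx = {
        "to": "0x0000000000000000000000000000000000000000",  # recipient placeholder
        "value": w3.to_wei(0.01, "ether"),                    # w3.toWei() on older releases
        "gas": 21000,
        "gasPrice": w3.to_wei(30, "gwei"),
        "nonce": w3.eth.get_transaction_count(acct.address),
        "chainId": 1,
    }
    signed = w3.eth.account.sign_transaction(tx, acct.key)
    # Attribute is rawTransaction on web3.py v5/v6, raw_transaction on v7+.
    tx_hash = w3.eth.send_raw_transaction(signed.raw_transaction)
    print("broadcast:", tx_hash.hex())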
Software Architecture
LEARN | 2020: Martin Fowler Blog - Software Development, primarily for Enterprise Applications:
RESEARCH | 2020: WebSockets - A Conceptual Deep-Dive (a minimal echo sketch follows this list)
RESEARCH | 2020: What is Serverless Computing?
BOOKS | 2020: Web Application Vulnerabilities: Detect, Exploit, Prevent by Steven Palmer
RESEARCH | 2020: What Is Internal Audit's Role in Cyber Security?
RESEARCH | 2020: Embedding a Tableau Visualisation with the JavaScript API
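Alongside the WebSockets deep-dive above, a minimal echo-server sketch with Python's third-party websockets package (pip install websockets; an assumed dependency, not something the linked article prescribes) shows the core idea of one persistent, bidirectional connection:

import asyncio
import websockets  # third-party: pip install websockets

async def echo(ws):  # older websockets versions pass (ws, path) instead
    # The connection stays open; every message received is sent straight back.
    async for message in ws:
        await ws.send(message)

async def main():
    # One long-lived, bidirectional connection per client on ws://localhost:8765
    async with websockets.serve(echo, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())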
RESOURCES FOR TRAINING | 2020: Papers with Code
Teamwork Skills & Project Management:
This is why I believe it's a bad choice to underestimate a worker's point of view: it could take the project to another level of success and understanding, and show how to fill the gaps to make it more resistant over time. Time is an element of success in a world where technological knowledge and artificial intelligence accelerate everything connected in between and around. Everyone should have the right and the freedom to speak without putting their position at risk; if an opinion helps the project, why not listen and try to understand? Nevertheless, it's important for every team member to understand the life cycle of the business plan so that it doesn't contradict the true values of the project. One must constantly question: "Is it a short, medium, or long plan?" and "What are the true intentions behind the concepts?" I believe team members should be informed of the true values so that their work adjusts to the company's values, an equilibrium of knowledge. If everyone works toward that, the process is easier and the risk percentage decreases. Ethical and professional attitudes count and turn it into a win-win game; they identify and give true perspective to projects. RESEARCH | 2020: Communication in organizations is equivalent to the neural network in the human body: if there is a misfire, the organism becomes inefficient or even dysfunctional.
OPINION | 2020: Where Are We in 'The Cycle'? - viewed through a business-cycle lens -
" High personal accountability:1. Drive for Results. Sometimes in organizations, it is really hard to focus. When we are sending multiple messages about what is critical and what others are accountable for, accountability dissipates. If you want people to be responsible, then you must clearly define the results that you want them to deliver, and let them have a fair amount of control on how they deliver those results.
2. Honesty and Integrity. When your boss asks in a company meeting, “how’s that project coming?” do you honestly reply, “we are really behind” or “pretty good?” Those who are accountable have the courage to tell the truth. This courage is often reinforced because people see their managers being open and direct with them.
3. Trust. We did some research on a set of leaders who were not trusted and found their employees had the following issues: I am not confident that my efforts will be rewarded or promoted; I suspect the leader may take advantage of me; I constantly question the leader's motives; I am sure they will take credit for my accomplishments. These are not factors that build accountability. In contrast, the three pillars that build trust are positive relationships, knowledge, and consistency of leaders.
4. Clear Vision and Direction. There is an old Chinese proverb that explains this issue well: “The hunter that chases two rabbits catches neither one.” In organizations, people are often chasing multiple rabbits and they don’t catch any of them. How can you expect people to be accountable if they aren’t absolutely clear about the organization’s vision for where they’re going and what needs to be accomplished? Clearly, you can’t.
5. Problem Solving and Technical Expertise. It is impossible to feel accountable when a person is confused and doesn’t know how things work. Teach your people the skills and give them the training they need, and make absolutely sure they know how to do the job you expect.
6. Communication. When a leader can effectively communicate, others can understand what they are accountable for. This requires being able to tell, ask, and listen to others.
7. Ability to Change. We found that people who are really good at creating change in an organization had employees who are operating at higher levels of accountability. Leaders who are good at instituting change are effective at the following behaviors: accepting feedback, taking on challenges, innovating, spreading optimism, showing concern, and setting clear goals.
8. Collaboration and Resolving Conflict. Collaboration is a difficult skill to achieve in an organization. Are you cooperating or competing with others in your group? Peter Blau at Columbia University did a series of studies on this issue showing that teams that collaborate and cooperate are far more successful than those that compete. Cooperation breeds accountability. On the long personal and organizational "to do" list, accountability should be at the top. If you see a fatal flaw in yourself or your current leaders on any of these eight points, address it immediately. In fact, the single greatest way to leverage accountability is to pick a few of these key behaviors to work on yourself. Why? The research is clear on this issue: great accountability in the organization begins with you."
🔗 ProperTree: https://github.com/corpnewt/ProperTree
🔗 MountEFI: https://github.com/corpnewt/MountEFI
🔗 OC-Gen-X: https://github.com/Pavo-IM/OC-Gen-X/releases
💻 Command to make bootable installer for macOS Big Sur:
sudo /Applications/Install\ macOS\ Big\ Sur.app/Contents/Resources/createinstallmedia --volume /Volumes/MyVolume
🔗 Find the commands for other versions of macOS -----> HERE <----
🔗 Links for SSDTs (make sure to select the correct processor type): https://dortania.github.io/Getting-Started-With-ACPI/ssdt-methods/ssdt-prebuilt.html#intel-desktop-ssdts
💻 Device Properties for Coffee Lake: PciRoot(0x0)/Pci(0x2,0x0)
AAPL,ig-platform-id 07009B3E
framebuffer-patch-enable 01000000
framebuffer-stolenmem 00003001
💻 Boot-args: -v keepsyms=1 debug=0x100 alcid=1 prev-lang:kbd en-US:0
The RedMonk Programming Language Rankings | June 2020
BOOKS | 2020: Download Programming Books
Update: FREE LABS TO TEST YOUR REDTEAM/BLUETEAM and CTF SKILLS:
Share with your network and friends.
· Attack-Defense - https://attackdefense.com
· Alert to win - https://alf.nu/alert1
· Bancocn - https://bancocn.com
· CTF Komodo Security - https://ctf.komodosec.com
· CryptoHack - https://cryptohack.org/
· CMD Challenge - https://cmdchallenge.com
· Cyberdefenders - https://lnkd.in/dVcmjEw8
· Ctftime - https://ctftime.org
· Dfirmadness - https://lnkd.in/dNkzQvXH
· Exploitation Education - https://exploit.education
· Google CTF - https://lnkd.in/e46drbz8
· HackTheBox - https://www.hackthebox.com
· Hackthis - https://www.hackthis.co.uk
· Hacksplaining - https://lnkd.in/eAB5CSTA
· Hacker101 - https://ctf.hacker101.com
· Hacker Security - https://lnkd.in/ex7R-C-e
· Hacking-Lab - https://hacking-lab.com/
· HSTRIKE - https://hstrike.com
· ImmersiveLabs - https://immersivelabs.com
· LetsDefend - https://letsdefend.io/
· NewbieContest - https://lnkd.in/ewBk6fU5
· OverTheWire - http://overthewire.org
· Practical Pentest Labs - https://lnkd.in/esq9Yuv5
· Pentestlab - https://pentesterlab.com
· Hackaflag BR - https://hackaflag.com.br/
· Penetration Testing Practice Labs - https://lnkd.in/e6wVANYd
· PentestIT LAB - https://lab.pentestit.ru
· PicoCTF - https://picoctf.com
· PWNABLE - https://lnkd.in/eMEwBJzn
· Pwn college - https://dojo.pwn.college
· Portswigger Labs - https://lnkd.in/dF8nFyEN
· Root-Me - https://www.root-me.org
· Root in Jail - http://rootinjail.com
· Rangeforce - https://www.rangeforce.com
· SANS Challenger - https://lnkd.in/e5TAMawK
· SmashTheStack - https://lnkd.in/eVn9rP9p
· SnapLabs - https://lnkd.in/d-yGATs7
· The Cryptopals Crypto Challenges - https://cryptopals.com
· Try Hack Me - https://tryhackme.com
· Vulnhub - https://www.vulnhub.com
· Vulnmachines - https://vulnmachines.com
· W3Challs - https://w3challs.com
· WeChall - http://www.wechall.net
· Zenk-Security - https://lnkd.in/ewJ5rNx2
#ctf #pentest #redteam #blueteam #hacking #informationsecurity #cybersecurity
"Where to begin? Why am I doing this? What's my end goal?" - Begin - Learn - Improve - Persist - . People who like to learn, share, and challenge themselves normally seek knowledge even through bad experiences and errors, and embrace failures as an opportunity to { [re]structure and [re]think } . Your attention determines the experiences you have, and the experiences you have determine the life you live. When the Nobel physicist Richard Feynman wrote on his blackboard, he was reminding himself and his students that unless they could take apart a concept or theory and understand each individual step, they did not truly understand it. Every step to get to the solution is valid, even errors. Through failure and deconstruction, one can understand how a concept behaves and discover a solid framework and a set of techniques to accomplish it. The process reveals a new path and enables the improvement of understanding. Many errors have led creators to new solutions! The way one thinks of a problem, the language chosen to talk about it, and the benefits of sharing models with users all determine the object life-cycle management and logical and physical application structuring. Most of the time, we are incapable of comprehending certain knowledge until we have experienced it firsthand. To avoid blindly following behind, always question. Knowing what one needs to know in order to make honest decisions for the good of all, rather than decisions for the good of a few, is crucial. Redifining value , is not just about profit maximization issues - both tangible and intangible investments -, but sustainable growth ( the measure of Value Creation vs. Revenue Extraction); # INSIGHTS | 2020 :
By getting the best out of the quantity and quality of available professionals, from the company itself and from external sources, internal audit can generate more value, offer new solutions and ideas, provide greater security, and modernize its traditional rules. Data is valuable, but it may not live up to what you expect: it can be lost, polluted, duplicated, or in breach of data-protection laws. Often these issues go unnoticed, ruining months of data with no way to retroactively clean it. Data is an important asset for companies, which must make efforts to ensure its quality, allowing them to acquire insights to understand consumers and better tailor their products and services. || 2022
Lean Budgeting:
# GITHUB | *UPDATE - where people build software
In the early 1980s, home computing was booming around the world as millions of people bought their very first machine from Commodore, Sinclair, Oric, Acorn, or Atari. The curiosity about merging digital arts with technology has been a part of my life since then.
I refocused my interest in programming, integrating science and arts, because I saw how traditional creative media companies' business strategies have been struggling to stay competitive in today's media landscape. They have been unable to invest in new models because of their resistance to change, and incapable of retaining talent because of underinvestment.
The consequences are more than visible now: consumers have been canceling their pay-TV subscriptions in favor of internet-delivered alternatives since 2011. I recommend reading the following book: The Netflix Effect.
New forms of production, distribution, and exhibition imply different ways of thinking, doing, and experimenting.
|| STORYset | Awesome free customizable illustrations for your next project ||
|| RENDERMAN Tutorials | RenderMan Fundamentals || || Linux Open-source video editor. Free and easy to use for any purpose, forever || || Top 10 Best Cyber Attack Simulation Tools to Improve Your Organization's Security – 2022 || || Penetration Test Reports ||
# INSIGHTS ON LIVE CODING | Feb 2021 *UPDATE : Programming languages used to simulate spatial computing - visual surround sound (HRTF) - using software/hardware tools
RESOURCES 2021 | AI and the future of the mind: Is it possible that we'll merge with AI?
FOOTNOTE: The PDCA model is a simple way to initiate change within an organization or start a project. Similar to OODA, PDCA is an acronym consisting of four steps: plan, do, check, and act. Although the PDCA method is easier to comprehend, some critics argue that it is too linear and may not align with the rapidly evolving realities of the corporate world. The process involves creating a plan for the project, putting it into action, assessing the results, and making adjustments to rectify errors. Ideally, one would use the insights gained to create a new plan, restarting the cycle from the beginning. The PDCA approach is deliberate and cautious, differing from the OODA loop, and may not suit situations where quick reactions are necessary.