This post was written by Josh Clemm.
On a cold evening in Paris in 2008, Travis Kalanick and Garrett Camp couldn't get a cab. That's when the idea for Uber was born. How great would it be if you could "push a button and get a ride?"
Fast forward to today: Uber is the largest mobility platform in the world, operating in over 70 countries and 10,500 cities. Uber Eats is the largest food delivery platform in the world outside of China, operating across 45 countries. We connect millions of driver-partners and merchants with over 130 million customers. We offer dozens of services to go anywhere or get anything. We handle billions of database transactions and millions of concurrent users across dozens of apps and thousands of backend services.
Back in 2009, Uber hired contractors to build the first version of the product and launched it to friends in San Francisco soon after. That first version was built on the classic LAMP stack, and the code was written in Spanish.
The original LAMP stack proved out the use case, but couldn’t scale.
The scaling challenges started as more people wanted to use Uber. There were often major concurrency issues, where we’d dispatch two cars to one person or one driver would be matched to two different riders. (Learn more about Uber’s earliest architecture from founding engineer Curtis Chambers).
But the product caught on. It was time to build the tech foundations from scratch.
Circa 2011
To architect a better and more scalable solution, we needed to address those concurrency problems. Additionally, we needed a system that could process large amounts of real-time data. Not only were there requests from riders, but Uber also needed to track drivers’ real-time locations in order to match riders as efficiently as possible. Finally, the product was still early and would require a lot of testing and iteration. We needed a solution that covered all of these scenarios.
Uber adopted Node.js for its real-time needs and ended up being one of the first major adopters of Node.js in production. Node.js was ideal for a few reasons. First, Node.js handles requests asynchronously (using a non-blocking, single-threaded event loop), so it can process significant amounts of data quickly. Second, Node.js runs on the V8 JavaScript engine, so not only is it performant, but it’s also excellent for quick iteration.
Uber then created a second service built in Python to handle business logic functions like authentication, promotions, and fare calculation.
The resulting architecture was two services. One built in Node.js ("dispatch") connected to MongoDB (later Redis) and the other built in Python ("API") connected to PostgreSQL.
Uber’s two-monolith architecture allowed the engineering org to begin to scale
And to improve the resiliency of Uber’s core dispatching flow, a layer between dispatch and API known as "ON" or Object Node was built to withstand any disruptions within the API service. (Learn more about Uber’s earliest efforts to maintain service reliability in this video Designing for Failure: Scaling Uber by Breaking Everything).
This architecture started to resemble a service-oriented architecture, which can be very powerful. As you carve out services to handle more dedicated functionality, you get the side benefit of being able to separate engineers into dedicated teams, which in turn allows for more rapid team scaling.
But as the team and number of features grew, the API service was getting bigger and bigger. More and more features were conflicting with one another. Engineering productivity was slowing down. And continuously deploying the codebase carried huge risks.
It was time to split out API into proper services.
Circa 2013
To prepare for our next phase of growth, Uber decided to adopt a microservice architecture. This design pattern encourages the development of small services dedicated to specific, well-encapsulated domain areas (e.g. rider billing, driver payouts, fraud detection, analytics, city management). Each service can be written in its own language or framework, and can have its own database or none at all. In practice, many backend services used Python, and many started to adopt Tornado for asynchronous request handling. By 2014, we had roughly 100 services.
Uber’s API monolith to microservice migration
While microservices can solve many problems, they also introduce significant operational complexity. You should only adopt microservices after understanding the tradeoffs, and you will likely need to build or leverage tools to counteract them. If you don’t consider the operational issues, you will simply create a distributed monolith.
Here are some examples of the issues microservices create and what Uber did to address them.
To ensure all services used a standardized service framework, we developed Clay, a Python wrapper around Flask for building RESTful backend services. It gave us consistent monitoring, logging, HTTP requests, deployments, and more.
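Clay itself was Python on Flask, but the idea generalizes to any language: wrap every endpoint so logging, timing, and error handling come for free and look the same across services. Here is a rough conceptual sketch in Go (not Uber's actual framework; the wrapper below is hypothetical):

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// withStandardMiddleware is a hypothetical stand-in for what a framework
// like Clay provides: every endpoint automatically gets request logging,
// timing, and consistent behavior without per-service boilerplate.
func withStandardMiddleware(name string, h http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		h(w, r)
		// A real framework would also emit metrics and traces here.
		log.Printf("endpoint=%s method=%s path=%s duration=%s",
			name, r.Method, r.URL.Path, time.Since(start))
	}
}

func main() {
	// A service author only writes business logic; the wrapper supplies
	// the operational consistency the platform team cares about.
	http.HandleFunc("/health", withStandardMiddleware("health", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```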
To discover and talk to other services and provide service resilience (fault tolerance, rate limiting, circuit breaking), Uber built Hyperbahn on top of TChannel. TChannel, our bi-directional RPC protocol, was built in-house mainly to gain better performance and forwarding for our Node and Python services, among other benefits.
To ensure well-defined RPC interfaces and stronger contracts across services, Uber used Apache Thrift.
To prevent cross-service compatibility issues, we use Flipr to feature-flag code changes, control rollouts, and handle many other config-based use cases.
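Flipr's internals aren't covered here, but the rollout pattern it enables is easy to picture: gate a code path on a config value and ramp it up by percentage of users. A minimal sketch in Go, with a hypothetical in-memory flag store standing in for a real dynamic config system:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// flags is a hypothetical stand-in for a dynamic config store like Flipr;
// in production these values would be fetched and refreshed at runtime.
var flags = map[string]uint32{
	"new_pricing_flow": 25, // roll out to 25% of users
}

// isEnabled buckets a user deterministically, so the same user always
// sees the same side of the rollout while the percentage ramps up.
func isEnabled(flag, userID string) bool {
	pct, ok := flags[flag]
	if !ok {
		return false
	}
	h := fnv.New32a()
	h.Write([]byte(flag + ":" + userID))
	return h.Sum32()%100 < pct
}

func main() {
	for _, user := range []string{"rider-1", "rider-2", "rider-3"} {
		fmt.Println(user, "new_pricing_flow:", isEnabled("new_pricing_flow", user))
	}
}
```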
To improve the observability of all service metrics, we built M3. M3 gives any engineer an easy way to observe the state of their service, both offline and through Grafana dashboards. We also leverage Nagios for alerting at scale.
For distributed tracing across services, Uber developed Merckx. This pulled data from a stream of instrumentation via Kafka. But as each service started to introduce asynchronous patterns, we needed to evolve our solution. We were inspired by Zipkin and ultimately developed Jaeger, which we still use today.
Over time, we’ve migrated to newer solutions like gRPC and Protobuf for interfaces. And many of our services utilize Golang and Java.
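To give a flavor of what the contract-first approach looks like today: you define the interface in Protobuf, generate client and server stubs, and implement only the handler. Below is a minimal Go server sketch, where riderpb is a hypothetical generated package rather than one of Uber's real APIs:

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	// Hypothetical package generated by protoc from a rider.proto file.
	riderpb "example.com/gen/riderpb"
)

// riderServer implements the interface that protoc generated from the
// Protobuf definition, so the contract is enforced at compile time.
type riderServer struct {
	riderpb.UnimplementedRiderServiceServer
}

func (s *riderServer) GetRiderProfile(ctx context.Context, req *riderpb.GetRiderProfileRequest) (*riderpb.GetRiderProfileResponse, error) {
	return &riderpb.GetRiderProfileResponse{RiderId: req.GetRiderId()}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":9090")
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	riderpb.RegisterRiderServiceServer(s, &riderServer{})
	log.Fatal(s.Serve(lis))
}
```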
Circa 2014
While Uber was creating many new backend services, we continued to use one single PostgreSQL database.
The single PostgreSQL DB became a bottleneck
We were hitting some significant issues. First, the performance, scalability, and availability of this DB were suffering. There were only so many CPUs and so much memory you could throw at it. Second, it was getting very hard for engineers to be productive. Adding new columns, tables, or indices for new features became problematic.
And the problem was getting existential. By early 2014, Uber was 6 months away from Halloween night - one of the biggest nights of the year. We needed a more scalable solution and needed it fast.
When we looked into the data mix of this DB, the majority of storage was related to our trips, which was also growing the fastest.
The mix of data stored in our single PostgreSQL DB in early 2014
We use trip data to improve services like Uber Pool, provide rider and driver support, prevent fraud, and develop and test features like suggested pick-ups. So we embarked on developing Schemaless, our new trip data store. Schemaless is an append-only, sparse, three-dimensional persistent hash map, similar to Google’s Bigtable, and built on top of MySQL. This model lent itself naturally to horizontal scaling by partitioning the rows across multiple shards, and it supported our rapid development culture.
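One way to picture the three dimensions: a cell is addressed by a row key (e.g. a trip UUID), a column name, and a ref key (a version), and its body is an opaque blob. The Go sketch below is a conceptual illustration of that addressing plus shard routing, not Uber's actual implementation:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// A cell in the three-dimensional map: (row key, column name, ref key) -> body.
// Cells are append-only; a new version of a column gets a higher ref key
// rather than overwriting the old one.
type Cell struct {
	RowKey string // e.g. a trip UUID
	Column string // e.g. "BASE", "STATUS"
	RefKey int64  // monotonically increasing version
	Body   []byte // opaque, schemaless blob (often JSON)
}

// shardFor routes a row key to one of N MySQL shards. All cells for a
// given row key land on the same shard, which keeps reads cheap and lets
// the dataset scale horizontally by adding shards.
func shardFor(rowKey string, numShards uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(rowKey))
	return h.Sum32() % numShards
}

func main() {
	c := Cell{RowKey: "trip-9f2c", Column: "STATUS", RefKey: 3, Body: []byte(`{"state":"completed"}`)}
	fmt.Printf("cell (%s, %s, v%d) -> shard %d\n", c.RowKey, c.Column, c.RefKey, shardFor(c.RowKey, 4096))
}
```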
And we successfully migrated all our services that access trip information in time to avert the Halloween peak traffic disaster. (Learn more with this video from lead engineer Rene Schmidt about our creation of and migration to Schemaless).
The Schemaless migration operational room - a common look into migrations at scale
While we used Schemaless for our trip data store, we started to use Cassandra as a replacement for our other data needs, including the database that we use for marketplace matching and dispatching.
Circa 2014
Of Uber’s original two monoliths, we have discussed how API evolved into hundreds of microservices. But dispatch was similarly doing too much. Not only did it handle matching logic, it was also the proxy that routed all other traffic to other microservices within Uber. So we embarked on an exercise to split up dispatch into two areas of cleaner separation.
Splitting the monolithic dispatch service into a real-time API gateway and an actual dispatch service
Extracting Uber’s Mobile Gateway from Dispatch
To better handle all the real-time requests from our mobile apps, we created a new API gateway layer named RTAPI ("Real-Time API"). And we continued to use Node.js for it. The service was a single repository that was broken up into multiple specialized deployment groups to support our growing businesses.
RTAPI provided a powerful new real-time layer that maintained high development velocity
The gateway provided a very flexible development space for writing new code and had access to the hundreds of services within the company. For instance, the first generation of Uber Eats was completely developed within the gateway. As the team's product matured, pieces were moved out of the gateway and into proper services.
Rewriting Dispatch for Uber’s Growing Size
The original dispatch service was designed for a simpler transportation model (one driver-partner and one rider). There were deep assumptions that Uber only needed to move people, not food or packages. Its state of available driver-partners was sharded by city, and some cities were seeing massive growth of the product.
So, dispatch was rewritten into a series of services. The new dispatch system needed to understand much more about the types of vehicles and rider needs.
It took on advanced matching optimizations, essentially solving a version of the traveling salesman problem. Not only did it look at the ETAs of currently available driver-partners, it also needed to understand which drivers could become available in the near future. So we had to build a geospatial index to capture this information. We used Google’s S2 library to segment cities into areas of consideration and used the S2 cell ID as the sharding key. (We’ve since moved to H3, which we open-sourced.)
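To make the sharding concrete: a driver's lat/lng maps to an S2 cell at a fixed level, and that cell ID becomes the shard key for the geospatial index. A small sketch using the open-source github.com/golang/geo/s2 library (the cell level and shard count below are illustrative, not Uber's actual values):

```go
package main

import (
	"fmt"

	"github.com/golang/geo/s2"
)

// cellForLocation maps a lat/lng to the S2 cell (at a fixed level) that
// contains it. Location updates in the same cell hash to the same shard,
// so nearby supply is co-located for matching.
func cellForLocation(lat, lng float64, level int) s2.CellID {
	ll := s2.LatLngFromDegrees(lat, lng)
	return s2.CellIDFromLatLng(ll).Parent(level)
}

func main() {
	const level = 12      // illustrative cell size (roughly a couple of km across)
	const numShards = 256 // illustrative shard count

	// Two nearby points in San Francisco will usually land in the same cell.
	a := cellForLocation(37.7749, -122.4194, level)
	b := cellForLocation(37.7755, -122.4190, level)

	fmt.Println("cell A:", a.ToToken(), "shard:", uint64(a)%numShards)
	fmt.Println("cell B:", b.ToToken(), "shard:", uint64(b)%numShards)
}
```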
Overview of dispatch stack
Since these services were still running on Node.js and were stateful, we needed a way to scale as the business grew. So we developed Ringpop, a gossip-protocol based approach to share geospatial and supply positioning for efficient matching.
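Ringpop (which Uber open-sourced) pairs gossip-based membership with a consistent hash ring, so each stateful worker owns a slice of the keyspace. The membership protocol is too involved for a short example, but the hash-ring half of the idea looks roughly like this conceptual Go sketch (not Ringpop's actual code; production rings also use virtual nodes and replication):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// ring is a minimal consistent-hash ring: each worker is hashed onto a
// circle of positions, and a key is owned by the first worker clockwise
// from the key's hash. Adding or removing a worker only moves the keys
// adjacent to it, which is what lets stateful workers scale out.
type ring struct {
	positions []uint32
	owners    map[uint32]string
}

func hash32(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

func newRing(workers []string) *ring {
	r := &ring{owners: map[uint32]string{}}
	for _, w := range workers {
		p := hash32(w)
		r.positions = append(r.positions, p)
		r.owners[p] = w
	}
	sort.Slice(r.positions, func(i, j int) bool { return r.positions[i] < r.positions[j] })
	return r
}

// lookup returns which worker owns a key, e.g. an S2 cell ID used as a shard key.
func (r *ring) lookup(key string) string {
	h := hash32(key)
	i := sort.Search(len(r.positions), func(i int) bool { return r.positions[i] >= h })
	if i == len(r.positions) {
		i = 0 // wrap around the circle
	}
	return r.owners[r.positions[i]]
}

func main() {
	r := newRing([]string{"dispatch-1:2300", "dispatch-2:2300", "dispatch-3:2300"})
	for _, cell := range []string{"cell-89c258", "cell-89c25c", "cell-808f7e"} {
		fmt.Println(cell, "->", r.lookup(cell))
	}
}
```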
Learn more about the history of our dispatch stack here or watch this video on Scaling Uber’s Real-time Market Platform.
Circa 2016 to present
The flagship Uber product could only have existed due to the new mobile paradigm created by the iPhone and Android OS launches in 2007. These modern smartphones contained key capabilities like location-tracking, seamless mapping, payment experiences, on-device sensors, feature-rich user experiences, and so much more.
So Uber’s mobile apps were always a critical part of our scaling story.
The Uber rider app was critical in defining a scalable mobile architecture
As Uber scaled across the globe, there was a need for an ever-growing list of features. Many of these were specific to certain countries like localized payment types, different car product types, detailed airport information, and even some newer bets in the app like Uber Eats and Uber for Business.
The mobile app’s repositories slowly hit bottlenecks similar to a backend monolith: many features and many engineers, all trying to work within a single releasable codebase.
This led Uber to develop the RIB architecture for mobile, starting with the rewrite of the main Uber app.
RIB stands for Router, Interactor, Builder
Like microservices, RIBs have a clear separation of responsibilities. And since each RIB serves a single responsibility, it was easy to separate RIBs and their dependencies into core and optional code. By demanding more stringent review for core code, we were more confident in the availability of our core flows. And this allowed simple feature flagging to ensure the app continued to run reliably.
And like microservices, RIBs can be owned by different teams and engineers. This allowed our mobile codebases to easily scale to hundreds of engineers.
Today, all our apps have adopted RIBs or are migrating towards it. This includes our main Driver app, the Uber Eats apps, and Uber Freight.
Circa 2017
Uber had been experimenting with a number of “Uber for X on-demand” concepts since 2014, and all early signs pointed towards food. So in late 2015, Uber Eats launched in Toronto and followed a similarly fast growth trajectory, just like UberX.
Uber Eats business growth compared with Uber rides (Uber Q3 2020 Earnings)
To enable this rapid growth, Uber Eats leveraged as much of the existing Uber tech stack as possible, while creating new services and APIs that were unique to food delivery (e.g. e-commerce capabilities like carts, menus, search, browsing).
A simplified view into early Uber Eats architecture and how it leveraged systems built for original Uber
The operations teams that needed to tune their cities’ marketplaces often got creative and did things that didn’t scale (until the appropriate tech was built).
Uber Eats Canada running scripts to help manage which stores were active and tuning the delivery radius of each based on available driver partners
Early Uber Eats was "simple" in that it supported a three-way marketplace of one consumer, one restaurant, and one driver-partner. Uber Eats today (130+ million users, dozens of countries) supports a variety of ordering modes and capabilities: 0-N consumers (e.g. guest checkout, group ordering), N merchants (e.g. multi-restaurant ordering), and 0-N driver-partners (e.g. large orders, restaurants that supply their own delivery fleet).
The history of how Uber Eats evolved probably deserves its own scaling story and I may one day get to it.
But for now, to learn more from the earliest days, I highly recommend listening to Uber Eats founder Jason Droege’s recount of "Building Uber Eats".
Circa 2018
No scaling story is complete without understanding the context and culture of the company. As Uber continued to expand city by city, local operations folks were hired to ensure their city launch would go successfully. They had tools to ensure their local marketplace would remain healthy and the flywheel would grow. As a result, Uber had a very distributed and decentralized culture. And that helped contribute to Uber’s success in getting to 600 cities by 2018.
That culture of decentralization continued within engineering, where one of our earliest cultural values was "Let Builders Build". This resulted in rapid engineering development that complemented Uber’s success growing across the globe.
But after many years, it resulted in the proliferation of microservices (thousands by 2018), thousands of code repositories, multiple product solutions solving very similar problems, and multiple solutions to common engineering problems. For example, there were different messaging queue solutions, varying database options, several communication protocols, and even many choices of programming language.
"You've got five or six systems that do incentives that are 75 percent similar to one another" - Former Uber CTO Thuan Pham
Developer productivity was hurting.
Engineering leadership recognized it was time for more standardization and consistency and formed Project Ark. Project Ark sought to address many aspects of engineering culture that contribute to scaling:
Engineer productivity,
Engineer alignment across teams,
Duplication,
Unmaintained critical systems, and
Knowledge access and documentation.
As a result, we elevated Java and Go as official backend languages to gain type safety and better performance, and deprecated the use of Python and JavaScript for backend services. We embarked on reducing our code repos from 12,000 down to a small number of monorepos for our main platforms (Java, Go, iOS, Android, and web). We defined more standardized architectural layers where client, presentation, product, and business logic would have clear homes. We introduced abstractions that let us group a number of services together (service "domains"). And we continued to standardize on a series of service libraries to handle tracing, logging, networking protocols, resiliency patterns, and more.
Circa 2020
By 2019, Uber had many business lines with numerous new applications (Uber Eats, Freight, ATG, Elevate, and more). Within each line of business, the teams managed their backend systems and their app. We needed the systems to be vertically independent for fast product development.
And our current mobile gateway was showing its age. RTAPI had been built years earlier and continued to use Node.js and JavaScript, now a deprecated language for backend services at Uber. We were also eager to make use of Uber’s newly defined architectural layers, as the ad hoc code added to RTAPI over the years was getting messy with view generation and business logic.
So we built a new Edge Gateway to start standardizing on the following layers (a rough sketch of how they compose follows the list):
Edge Layer: the API lifecycle management layer. No extraneous logic can be added, keeping it clean.
Presentation Layer: microservices that build view generation and aggregation of data from many downstream services.
Product Layer: microservices that provide functional and reusable APIs that describe their product. Can be reused by other teams to compose and build new product experiences.
Domain Layer: microservices that are the leaf nodes, each providing a single refined piece of functionality for a product team.
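As a rough illustration of the layering (with entirely hypothetical service and endpoint names), a presentation-layer handler composes data from product-layer services and shapes it for a screen, while the edge layer in front of it sticks to API lifecycle concerns:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Hypothetical product-layer clients. In reality these would be RPC
// clients to independently owned services; here they are stubbed so the
// sketch is self-contained.
type tripsClient struct{}
type ratingsClient struct{}

func (tripsClient) RecentTrips(userID string) []string { return []string{"trip-1", "trip-2"} }
func (ratingsClient) Rating(userID string) float64     { return 4.9 }

// riderHomeHandler lives in the presentation layer: it aggregates
// product-layer data and shapes it for a screen. It owns view
// composition, not business rules, and the edge layer in front of it
// handles API lifecycle concerns like routing and protocol handling.
func riderHomeHandler(trips tripsClient, ratings ratingsClient) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		userID := r.URL.Query().Get("user_id")
		view := map[string]interface{}{
			"recent_trips": trips.RecentTrips(userID),
			"rating":       ratings.Rating(userID),
		}
		json.NewEncoder(w).Encode(view)
	}
}

func main() {
	http.HandleFunc("/rider/home", riderHomeHandler(tripsClient{}, ratingsClient{}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```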
This evolution set us up well to continue building new products with velocity, yet with the necessary structure to align our thousands of engineers.
Circa 2021
Throughout the years, Uber has created a world-class platform for matching riders and driver-partners. So our dispatch and fulfillment tech stack is a critical part of Uber’s scaling story.
By 2021, Uber sought to power more and more delivery and mobility use cases. The fulfillment stack was showing its age and couldn’t easily support all these new scenarios. For example, we needed to support reservation flows where a driver is confirmed upfront, batching flows with multiple trips offered simultaneously, virtual queue mechanisms at airports, the three-sided marketplace for Uber Eats, and delivering packages through Uber Direct.
Some example services the Fulfillment Platform needed to support
So we made a bold bet and embarked on a journey to rewrite the Fulfillment Platform from the ground up.
To satisfy the requirements of transactional consistency, horizontal scalability, and low operational overhead, we decided to leverage a NewSQL architecture. And opted to use Google Cloud Spanner as the primary storage engine.
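For a flavor of what that looks like, here is a minimal Go sketch of writing fulfillment state through the cloud.google.com/go/spanner client; the database path, table, and columns are hypothetical, not Uber's actual schema:

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/spanner"
)

func main() {
	ctx := context.Background()

	// Hypothetical database path; a real deployment would use its own
	// project, instance, and database names.
	db := "projects/example-project/instances/fulfillment/databases/jobs"
	client, err := spanner.NewClient(ctx, db)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Apply a mutation: Spanner provides externally consistent
	// transactions across rows and shards, which is the property the
	// fulfillment platform needed without managing sharding itself.
	m := spanner.InsertOrUpdate(
		"Jobs",
		[]string{"JobID", "State"},
		[]interface{}{"job-123", "DRIVER_CONFIRMED"},
	)
	if _, err := client.Apply(ctx, []*spanner.Mutation{m}); err != nil {
		log.Fatal(err)
	}
}
```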
As lead Uber engineer Ankit Srivastava puts it "as we scale and expand our global footprint, Spanner's scalability & low operational cost is invaluable. Prior to integrating Spanner, our data management framework demanded a lot of oversight and operational effort, escalating both complexity and expenditure."
The Uber-GCP network infrastructure
Of course, our scaling story is never this simple. There's a countless number of things we've done over the years across all engineering and operations teams, including some of these larger initiatives.
Many of our most critical systems have their own rich history and evolution to address scale over the years. This includes our API gateway, fulfillment stack, money stack, real-time data intelligence platform, geospatial data platform (where we open-sourced H3), and building machine learning at scale through Michelangelo.
We’ve introduced various layers of Redis caches. We’ve also adopted powerful frameworks that aid in building scalable and reliable systems, like Cadence (for writing fault-tolerant, long-running workflows).
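Cadence, which Uber open-sourced, lets you express long-running business logic as ordinary code while the framework persists progress and retries failures. A minimal sketch with the Cadence Go client (the receipt workflow and activity are hypothetical, and registering them with a Cadence worker is omitted):

```go
package app

import (
	"context"
	"time"

	"go.uber.org/cadence/workflow"
)

// SendReceiptActivity is a hypothetical activity; activities hold the
// side effects (API calls, DB writes) and can be retried independently.
func SendReceiptActivity(ctx context.Context, tripID string) (string, error) {
	return "receipt-for-" + tripID, nil
}

// TripReceiptWorkflow is durable: if the worker crashes midway, Cadence
// replays the workflow from its event history and resumes where it left off.
func TripReceiptWorkflow(ctx workflow.Context, tripID string) (string, error) {
	ao := workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    time.Minute,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	var receiptID string
	if err := workflow.ExecuteActivity(ctx, SendReceiptActivity, tripID).Get(ctx, &receiptID); err != nil {
		return "", err
	}
	return receiptID, nil
}
```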
We’ve built and leveraged data infrastructure that enables long term growth, like how we leveraged Presto or scaled Spark. Notably, we built Apache Hudi to power business critical data pipelines at low latency and high efficiency.
And finally, we continue to improve the performance of our servers with optimized hardware, advanced memory and system tuning, and utilizing newer runtimes.
Being a global company, Uber operated out of multiple on-prem data centers from the earliest days. But that introduced a number of challenges.
First, our server fleet had grown rapidly (to over 250,000 servers), and the tooling and teams were always trying to keep up. Next, we had a large geographical footprint and needed to regularly expand into more data centers and availability zones. Finally, with only on-prem machines, we constantly needed to tune the size of our fleet.
We spent the last few years working towards making over 4,000 of our stateless microservices portable. And to ensure our stack would work equally well across cloud and on-prem environments, we embarked on Project Crane. This effort set Uber up well for the future. To learn more, watch lead Uber engineer Kurtis Nusbaum’s talk on how Crane solves our scaling problems.
We now have plans to migrate a larger portion of our online and offline server fleet to the Cloud over the next few years!
Thanks to my many Uber friends and colleagues for reviewing this!