Nginx Port and Domain Binding: Easily Achieve Domain Access to the Server
This article explains how to bind ports and domains in Nginx so that a single server can host multiple websites/services, distinguishing sites by "port + domain name". Nginx defines virtual hosts through `server` blocks, with key directives including `listen` (port), `server_name` (domain name), `root` (file path), and `index` (home page). Prerequisites: Nginx installed on the server, the domain registered and resolved to a public IP, and the server confirmed reachable. The practical cases cover two scenarios: 1. the same domain on different ports (e.g., binding ports 80 and 443 for `www.myblog.com`, the latter requiring an HTTPS certificate); 2. different domains on different ports (e.g., `www.myblog.com` on port 80, `blog.myblog.com` on port 8080). Configuration files live in `/etc/nginx/conf.d/`, and each must include `listen` and `server_name`. Verification: run `nginx -t` to check syntax, `systemctl restart nginx` to apply changes, and confirm access in a browser. Common issues: configuration errors (check the syntax), domain resolution not yet in effect (wait for DNS or check with `nslookup`), and port conflicts (change the port or free the one in use).
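A minimal sketch of the second scenario above (two domains on different ports); the domain names and directories are placeholders:

```nginx
# /etc/nginx/conf.d/myblog.conf — illustrative only
server {
    listen 80;
    server_name www.myblog.com;
    root /var/www/myblog;
    index index.html;
}

server {
    listen 8080;
    server_name blog.myblog.com;
    root /var/www/blog;
    index index.html;
}
```

Check with `nginx -t` and apply with `systemctl restart nginx`, as the article describes.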
Common Nginx Commands: Essential Start, Stop, Restart, and Configuration Check for Beginners
This article introduces the core commands for Nginx daily management to help beginners get started quickly. There are two ways to start Nginx: using `nginx` for source installations, and `sudo systemctl start nginx` for system services installed via yum/apt. Verify with `ps aux | grep nginx` or by accessing the test page. For stopping, there is a quick stop (`nginx -s stop`, which may interrupt ongoing requests) and a graceful stop (`nginx -s quit`, recommended, which waits for current requests to complete); the difference lies in whether in-flight requests are dropped. For restarting, there are two methods: reloading the configuration (`nginx -s reload`, essential after configuration changes, applied without interrupting service) and a full restart (`systemctl restart`, which may cause a brief interruption). Configuration checks require first verifying syntax with `nginx -t`, then applying changes with `nginx -s reload`. `nginx -T` displays the complete effective configuration. Common commands for beginners include start/stop, reload, and syntax checking. Mind permissions, configuration paths, and log troubleshooting. Mastering these commands enables efficient daily Nginx operation and maintenance.
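The commands above, gathered into a quick cheat sheet (the `systemctl` forms assume a systemd-managed install):

```bash
sudo systemctl start nginx    # start (systemd-managed installs)
nginx -s quit                 # graceful stop: finish in-flight requests
nginx -s stop                 # quick stop: may drop active requests
nginx -t                      # check configuration syntax
nginx -s reload               # reload configuration without downtime
nginx -T                      # dump the full effective configuration
```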
Nginx Beginner's Guide: Configuring an Accessible Web Server
Nginx is a high-performance, lightweight web server/reverse proxy, ideal for high-concurrency scenarios. It features low resource consumption, flexible configuration, and ease of use. **Installation**: On mainstream Linux systems (Ubuntu/Debian/CentOS/RHEL), install via `apt` or `dnf`. Start and enable Nginx with `systemctl start nginx` and `systemctl enable nginx`, then verify with `systemctl status nginx` or by accessing the server's IP address. **Core Configuration**: Configuration files are located in `/etc/nginx/`, where `nginx.conf` is the main configuration file and `conf.d/` stores virtual host configurations. Create a website directory (e.g., `/var/www/html`), write an `index.html` file, and add a `server` block in `conf.d/` (listening on port 80 and pointing at the website directory). **Testing & Management**: After modifying configurations, use `nginx -t` to check syntax and `systemctl reload nginx` to apply changes. Ensure port 80 is open in the firewall and file permissions are correct before testing access. Common commands include `start/stop/restart/reload nginx` and status checks.
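A minimal `conf.d/` virtual host along these lines; the file name and site directory are assumptions:

```nginx
# /etc/nginx/conf.d/mysite.conf — minimal virtual host,
# assuming the site files live in /var/www/html
server {
    listen 80;
    server_name _;          # match any host name
    root /var/www/html;
    index index.html;
}
```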
Nginx Dynamic and Static Content Separation: Speed Up and Stabilize Your Website Loading
Nginx static-dynamic separation splits static resources (images, CSS, JS, etc.) from dynamic resources (PHP, APIs, etc.). Nginx focuses on quickly returning static resources, while backend servers handle dynamic requests. This approach improves page loading speed, reduces backend pressure, and enhances scalability (static resources can be deployed on CDNs, and dynamic requests can use load balancing). The core of the implementation is distinguishing requests with Nginx's `location` directive: static resources (e.g., `.jpg`, `.js`) are returned directly from paths specified with `root`; dynamic requests (e.g., `.php`) are forwarded to the backend (e.g., PHP-FPM) via `fastcgi_pass` or similar. In practice, within the `server` block, use `~*` to match static suffixes and set their paths, and `~` to match dynamic requests and forward them to the backend. After verifying the configuration with `nginx -t`, restart Nginx to apply the changes.
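A sketch of what such a split might look like; the domain, paths, and PHP-FPM address are assumptions:

```nginx
server {
    listen 80;
    server_name example.com;    # placeholder domain

    # Static resources: serve directly from disk
    location ~* \.(jpg|jpeg|png|gif|css|js)$ {
        root /var/www/static;
        expires 7d;             # let browsers cache static files
    }

    # Dynamic requests: hand off to PHP-FPM
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```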
Introduction to Nginx Caching: Practical Tips for Improving Website Access Speed
Nginx caching temporarily stores frequently accessed content to "trade space for time," improving access speed, reducing backend pressure, and saving bandwidth. It mainly comes in two types: proxy caching (for static resources in reverse-proxy scenarios, fetching from the backend origin on a cache miss) and web caching (HTTP caching, which relies on `Cache-Control` headers from the backend so browsers cache locally). Dynamic or frequently changing content (e.g., user information, real-time data) should not be cached. Configuring proxy caching involves defining the cache path and parameters with `proxy_cache_path` (cache size, key rules), enabling caching in a `location` block (e.g., `proxy_cache my_cache`), and restarting Nginx after verifying the configuration. Management involves checking cache status (logging `HIT`/`MISS`), clearing caches (manually deleting cache files or using the `ngx_cache_purge` module), and optimization (caching only static resources, setting `max-age` sensibly). Common issues: for cache misses, check the configuration, backend headers, or permissions; for stale content, verify the `Cache-Control` headers. Key points: cache only static content, monitor hit status via logs, and never cache dynamic content.
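A hedged example of the proxy-cache setup described; the zone name, sizes, and backend address are illustrative:

```nginx
# Cache zone definition (http context); sizes are illustrative
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    location /static/ {
        proxy_cache my_cache;
        proxy_cache_valid 200 302 10m;        # cache successful responses
        add_header X-Cache-Status $upstream_cache_status;  # surfaces HIT/MISS
        proxy_pass http://127.0.0.1:8080;     # assumed backend
    }
}
```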
Configuring HTTPS in Nginx: A Step-by-Step Guide to Achieving Secure Website Access
This article introduces the necessity and practical methods of configuring HTTPS for websites. HTTPS secures data transmission through SSL/TLS encryption, preventing user information from being stolen. It also improves search engine rankings and user trust (browser "insecure" warnings hurt the experience), making it an essential configuration for modern websites. The core of the configuration is using free Let's Encrypt certificates (obtained via the Certbot tool). On Ubuntu/Debian systems, execute `apt install certbot python3-certbot-nginx` to install Certbot and the Nginx plugin. Then run `certbot --nginx -d example.com -d www.example.com` to obtain a certificate for the specified domains. Certbot automatically configures Nginx (listening on port 443, setting SSL certificate paths, and redirecting HTTP to HTTPS). Verification methods include checking certificate status (`certbot certificates`) and visiting the HTTPS site in a browser to check for the lock icon. Mind the certificate paths, permissions, and firewall port configuration. Let's Encrypt certificates are valid for 90 days; Certbot configures automatic renewal, which can be tested with `certbot renew --dry-run`. In summary, HTTPS configuration is simple and enhances security, SEO, and user experience, making it an essential skill for modern websites.
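Roughly the shape of the server blocks Certbot leaves behind (certificate paths follow the standard `/etc/letsencrypt/live/<domain>/` layout; the domain is a placeholder):

```nginx
server {
    listen 443 ssl;
    server_name example.com www.example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    root /var/www/html;
}

server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;   # redirect HTTP to HTTPS
}
```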
Nginx Virtual Hosts: Deploying Multiple Websites on a Single Server
This article introduces the Nginx virtual host feature, which allows a single server to host multiple websites, thereby reducing costs. The core idea is that one Nginx instance runs multiple independent `server` blocks, each behaving as its own virtual server. There are three implementation methods: domain-name-based (the most common, where different domains map to different websites), port-based (distinguished by port, suitable when extra domains are unavailable), and IP-based (for servers with multiple IPs, where each IP maps to a website). Before configuration, Nginx must be installed, website content prepared (e.g., directories `/var/www/site1` and `/var/www/site2`, each with a homepage), and domain resolution or test domains (optional) arranged. Taking the domain-name-based method as an example, the steps are: create the configuration file `/etc/nginx/sites-available/site1.com`, write a `server` block (listening on port 80, matching the domain, specifying the root directory), configure the second website similarly, create a symlink into `sites-enabled`, test with `nginx -t`, and restart Nginx. For the other methods: the port-based approach specifies a different port (e.g., 8080) in the `server` block; the IP-based approach requires the server to bind multiple IPs, with the `listen` directive specifying IP and port. Common issues include permissions, configuration errors, and domain resolution; check directory permissions, syntax, and that the domain points to the server's IP. In summary, Nginx's virtual host feature is a cost-effective way to host multiple websites on a single server, with flexible configuration based on domain names, ports, or IPs.
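A sketch of the two name-based virtual hosts from the walkthrough (domains and directories are the article's placeholders):

```nginx
# /etc/nginx/sites-available/site1.com — name-based virtual host
server {
    listen 80;
    server_name site1.com;
    root /var/www/site1;
    index index.html;
}

# /etc/nginx/sites-available/site2.com
server {
    listen 80;
    server_name site2.com;
    root /var/www/site2;
    index index.html;
}
```

Enable each with a symlink, e.g. `ln -s /etc/nginx/sites-available/site1.com /etc/nginx/sites-enabled/`.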
Nginx Static Resource Service: Rapid Setup for Image/File Access
Nginx is well suited to hosting static resources such as images and CSS thanks to its high performance, light weight, stability, and strong concurrency handling, which speed up access and save server resources. For installation, run `sudo apt install nginx` on Ubuntu/Debian or `sudo yum install nginx` on CentOS/RHEL; after startup, access `localhost` to verify. For the core configuration, create `static.conf` in `/etc/nginx/conf.d/`: listen on port 80, use `location` to match paths (e.g., `/images/` and `/files/`), specify the resource root directory with `root`, and enable directory browsing with `autoindex on` (the `autoindex_exact_size` and `autoindex_localtime` options control how sizes and times are displayed). For testing, create `images` and `files` directories under `/var/www/static`, place files in them, run `nginx -t` to check the configuration, and reload with `systemctl reload nginx`. Then test access via `localhost/images/xxx.jpg` or `localhost/files/xxx.pdf`. Key considerations are the Nginx user's permissions and confirming the reload took effect. Setting up an Nginx static resource service is simple; the core is path configuration and directory browsing, ideal for rapid static hosting, and it can be extended with features like image compression and anti-hotlinking.
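One way the described `static.conf` might look (paths are the article's examples; the `autoindex_*` settings are optional):

```nginx
# /etc/nginx/conf.d/static.conf — illustrative
server {
    listen 80;
    server_name localhost;

    location /images/ {
        root /var/www/static;      # resolves to /var/www/static/images/
    }

    location /files/ {
        root /var/www/static;
        autoindex on;              # enable directory listing
        autoindex_exact_size off;  # human-readable sizes
        autoindex_localtime on;    # local timestamps
    }
}
```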
Nginx Load Balancing: Simple Configuration for Multi-Server Traffic Distribution
This article introduces Nginx load balancing configuration to solve the problem of excessive load on a single server. At least two backend servers running the same service are required, with Nginx installed and the backend ports open. The core configuration consists of two steps: first, define the backend server group using `upstream` (supporting round-robin, weights, and health checks, e.g., `server 192.168.1.100:8080 weight=2;` or `max_fails=2 fail_timeout=10s`); second, configure `proxy_pass` to this group in the `server` block, passing the client's `Host` and real IP (`proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr;`). Verification involves running `nginx -t` to check syntax and `nginx -s reload` to apply the changes, then testing access to confirm requests are distributed. Common issues such as unresponsive backends or configuration errors can be resolved by checking firewalls and logs. Advanced strategies include IP hashing (`ip_hash`) and URL hashing (which requires an additional module).
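Putting the two steps together, using the article's own snippets (IPs and ports are examples):

```nginx
upstream backend_pool {
    server 192.168.1.100:8080 weight=2;                    # gets twice the traffic
    server 192.168.1.101:8080 max_fails=2 fail_timeout=10s; # passive health check
}

server {
    listen 80;
    location / {
        proxy_pass http://backend_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```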
Introduction to Nginx Reverse Proxy: Easily Achieve Frontend-Backend Separation
In a web front-end and back-end separation architecture, an Nginx reverse proxy can solve problems such as cross-origin issues, complex domain name management, and back-end exposure. The reverse proxy acts as an intermediary server: users visit Nginx, which forwards requests to the real back-end service, transparently to the user. With front-end and back-end separated, a reverse proxy can unify domain names (users only need to remember one domain), hide the back-end address (enhancing security), and distribute requests by path (e.g., `/` for the front-end and `/api` for the back-end). Nginx is simple to install (Ubuntu uses `apt install nginx`, CentOS uses `yum install nginx`). The core of the configuration is the `location` block: front-end static files use `root` and `index` to point at the front-end directory, while the back-end API uses `proxy_pass` to forward to the real address, with `proxy_set_header` passing header information. In practice, place the front-end files in the Nginx directory; once the back-end service is running, use `location` to separate the paths. Nginx receives requests and forwards them, so users complete front-end/back-end interaction through a single domain. Reverse proxying also supports extended functions such as load balancing and caching, making it a key tool in front-end and back-end separation architectures.
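A sketch of the path-based split described above; the front-end directory and back-end address are assumptions:

```nginx
server {
    listen 80;
    server_name example.com;    # single public domain

    # Front-end static files
    location / {
        root /var/www/frontend;
        index index.html;
    }

    # Back-end API, hidden behind Nginx
    location /api {
        proxy_pass http://127.0.0.1:3000;   # assumed backend address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```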
Detailed Explanation of Nginx Configuration Files: Server Block and Location for Beginners
The core of Nginx configuration lies in `server` blocks (virtual hosts) and `location` blocks (path routing). The main configuration file (`nginx.conf`) comprises the global context (directives like `worker_processes`), the events context (`worker_connections`), and the http context (which contains the `server` blocks). A `server` block defines a website using directives such as `listen` (port), `server_name` (domain name), `root` (root directory), and `index` (homepage). `location` blocks match requests by path, supporting exact, prefix, and regular-expression forms, with matching priority: exact match (`=`) > prefix with `^~` > regular expressions (checked in file order) > longest ordinary prefix. After configuration, use `nginx -t` to verify syntax and `nginx -s reload` to apply changes. Once the basics (port, domain name, static paths) are mastered, beginners can progressively learn advanced features like dynamic request forwarding and caching.
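A small illustration of the match types and their priority (paths and responses are made up):

```nginx
server {
    listen 80;
    server_name example.com;

    location = /exact    { return 200 "exact match"; }  # highest priority
    location ^~ /static/ { root /var/www; }             # prefix; skips regex checks
    location ~* \.png$   { root /var/www/img; }         # regex, checked in file order
    location /           { root /var/www/html; }        # ordinary prefix fallback
}
```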
Learn Nginx from Scratch: A Step-by-Step Guide to Installation and Startup
This article introduces the basics of learning Nginx, emphasizing its lightweight design, efficiency, and flexible configuration, making it suitable for web server setups. Nginx supports both Windows and Linux. Installation is explained using Ubuntu/Debian and CentOS/RHEL as examples: for Ubuntu, run `apt update` followed by `apt install nginx`; for CentOS, first install the EPEL repository and then use `yum install nginx`. After starting with `systemctl start nginx`, access `localhost` to verify that the default welcome page displays. The core configuration files are located in `/etc/nginx/`, where the `default` site configuration defines listening on port 80, the root directory `/var/www/html`, and so on. Common commands include starting/stopping, reloading, and syntax checking. The article also covers common troubleshooting (port conflicts, configuration errors) and how to customize the homepage. For Windows, download, extract, and start via the command line. Finally, it encourages hands-on practice to master advanced features.
Node.js File System: Quick Reference Guide for Common fs Module APIs
This article introduces the core APIs of the `fs` module in Node.js, helping beginners quickly get started with file operations. The `fs` module provides both synchronous and asynchronous APIs: synchronous methods (e.g., `readFileSync`) block execution and are suitable for simple scripts, while asynchronous methods (e.g., `readFile`) are non-blocking and handle results via callbacks, making them ideal for high-concurrency scenarios. Common APIs include: reading files with `readFile` (asynchronous) or `readFileSync` (synchronous); writing with `writeFile` (overwrite mode); creating directories with `mkdir` (supports recursive creation); deleting files/directories with `unlink`/`rmdir` (non-empty directories require `fs.rm` with `recursive: true`); reading directories with `readdir`; getting file information with `stat`; and checking existence with `existsSync`. Advanced tips: Use the `path` module for path handling; always check for errors in asynchronous operations; optimize memory usage for large files with streams; and be mindful of file permissions. Mastering the basic APIs will cover most common scenarios, with further learning needed for complex operations like stream processing.
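A few of the listed APIs in use; file and directory names are placeholders:

```js
const fs = require('fs');

// Asynchronous read: errors arrive as the callback's first argument
fs.readFile('notes.txt', 'utf8', (err, data) => {
  if (err) return console.error(err);
  console.log(data);
});

// Synchronous read: blocks until done, so wrap in try/catch
try {
  const data = fs.readFileSync('notes.txt', 'utf8');
  console.log(data);
} catch (err) {
  console.error(err);
}

// Recursive directory creation, then removal of the whole tree
fs.mkdir('a/b/c', { recursive: true }, (err) => {
  if (err) throw err;
  // fs.rm handles non-empty directories with recursive: true
  fs.rm('a', { recursive: true, force: true }, (err) => { if (err) throw err; });
});
```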
Non-blocking I/O in Node.js: Underlying Principles for High-Concurrency Scenarios
This article focuses on explaining Node.js non-blocking I/O and its advantages. Traditional synchronous blocking I/O forces the program to wait for I/O completion, leaving the CPU idle and performing very poorly under high concurrency. Non-blocking I/O, by contrast, initiates a request without waiting, immediately executes other tasks, and is notified of completion through callback functions scheduled by the event loop. Node.js implements non-blocking I/O through the event loop and the libuv library: asynchronous I/O requests are handed to the kernel (e.g., Linux epoll) by libuv; the kernel monitors completion, and the corresponding callback is then added to the task queue, so the main thread is never blocked and keeps processing other work. The high-concurrency capability comes from the single-threaded JS engine never blocking while a large number of I/O requests wait concurrently: the total elapsed time approaches that of the slowest single request rather than the sum of them all. libuv abstracts the cross-platform I/O models and maintains the event loop (handling microtasks, macrotasks, and I/O callbacks) to schedule callbacks uniformly. Non-blocking I/O lets Node.js excel at web servers, real-time communication, and I/O-intensive data processing. It is the core of Node.js's high-concurrency handling, efficiently supporting tasks like front-end tooling and API services.
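A minimal demonstration of the non-blocking execution order described above:

```js
const fs = require('fs');

console.log('1: issue the read');

// The read is handed to libuv/the kernel; the main thread is not blocked
fs.readFile(__filename, 'utf8', (err, data) => {
  if (err) throw err;
  console.log('3: I/O finished,', data.length, 'characters');
});

console.log('2: keep serving other work');  // prints before the callback
```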
Node.js REPL Environment: An Efficient Tool for Interactive Programming
The Node.js REPL (Read-Eval-Print Loop) is an interactive programming environment that provides immediate feedback through an input-execute-output loop, making it suitable for learning and debugging. To start, install Node.js and enter `node` in the terminal, where you'll see the `>` prompt. Basic operations include simple calculations (e.g., `1+1`), variable definition (`var message = "Hello"`), and testing functions/APIs (e.g., `add(2,3)` or the array `map` method). Common commands are `.help` (view commands), `.exit` (quit), `.clear` (reset the session context), `.save`/`.load` (file operations), with support for arrow-key history navigation and Tab auto-completion. The REPL enables quick debugging, API testing (e.g., the `fs` module), and temporary script execution. Note that variables live only for the session, making it ideal for rapid validation rather than large-scale project development. It serves as an efficient tool for Node.js learning, accelerating code verification and debugging.
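A short example session (output abridged):

```text
$ node
> 1 + 1
2
> var message = "Hello"
undefined
> message + ", Node!"
'Hello, Node!'
> [1, 2, 3].map(x => x * 2)
[ 2, 4, 6 ]
> .exit
```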
Building a RESTful API with Node.js: Routing and Responses in Practice
This article covers the core workflow of building a RESTful API with Node.js and Express. Node.js suits high-concurrency services thanks to its non-blocking I/O and single-threaded model, and the Express framework is lightweight and efficient, making it a good entry point. Preparation: install Node.js (the LTS release is recommended), initialize a project, and install the framework with `npm install express`. The core is creating a service with Express: import the framework, create an instance, and define routes. Methods such as `app.get()` handle the different HTTP verbs (GET/POST/PUT/DELETE), and the `express.json()` middleware parses JSON request bodies. Each verb maps to an operation: GET retrieves resources, POST creates, PUT updates, DELETE removes. Route parameters and request bodies carry the data, and status codes such as 200, 201, and 404 are set on the responses. Advanced topics include modularizing routes (splitting route files) and 404 handling; finally, the API can be tested with Postman or curl. Once this is mastered, connecting a database extends the basic API.
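A compact sketch of such an Express API, using an in-memory array in place of a database (route names and the data shape are assumptions):

```js
const express = require('express');
const app = express();
app.use(express.json());            // parse JSON request bodies

let todos = [{ id: 1, title: 'learn Express' }];   // in-memory stand-in for a DB

app.get('/todos', (req, res) => res.status(200).json(todos));

app.post('/todos', (req, res) => {
  const todo = { id: todos.length + 1, ...req.body };
  todos.push(todo);
  res.status(201).json(todo);       // 201: resource created
});

app.delete('/todos/:id', (req, res) => {
  const before = todos.length;
  todos = todos.filter(t => t.id !== Number(req.params.id));
  if (todos.length === before) return res.status(404).json({ error: 'not found' });
  res.status(200).json({ deleted: req.params.id });
});

app.listen(3000);
```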
Frontend Developers Learning Node.js: The Mindset Shift from Browser to Server
This article introduces the necessity and core points for front-end developers to learn Node.js. Based on Google Chrome's V8 engine, Node.js enables JavaScript to run on the server-side, overcoming the limitations of front-end developers in building back-end services and enabling full-stack development. Its core features include "non-blocking I/O" (handling concurrent requests through the event loop), "full-access" environment (capable of operating on files and ports), and the "CommonJS module system". For front-end developers transitioning to back-end roles, mindset shifts are required: changing from the sandboxed (API-limited) runtime environment to a full-access environment; transforming asynchronous programming from an auxiliary task (e.g., setTimeout) to a core design principle (to avoid server blocking); and adjusting from ES Modules to CommonJS (require/module.exports) for module systems. The learning path includes: mastering foundational modules (fs, http), understanding asynchronous programming (callbacks/Promise/async), developing APIs with frameworks like Express, and exploring the underlying principles of tools such as Webpack and Babel. In summary, Node.js empowers front-end developers to build full-stack capabilities without switching programming languages, enabling them to understand server-side logic and expand career horizons. It is a key tool for bridging the gap between front-end and back-end development.
Node.js Buffer: An Introduction to Handling Binary Data
In Node.js, when dealing with binary data such as images and network transmission data, the Buffer is a core tool for efficiently storing and manipulating byte streams. It is a fixed-length array of bytes, where each element is an integer between 0 and 255. Buffer cannot be dynamically expanded and serves as the foundation for I/O operations. There are three ways to create a Buffer: `Buffer.alloc(size)` (specifies the length and initializes it to 0), `Buffer.from(array)` (converts an array to a Buffer), and `Buffer.from(string, encoding)` (converts a string to a Buffer, requiring an encoding like utf8 to be specified). A Buffer can read and write bytes via indices, obtain its length using the `length` property, convert to a string with `buf.toString(encoding)`, and concatenate Buffers using `Buffer.concat([buf1, buf2])`. Common methods include `write()` (to write a string) and `slice()` (to extract a portion). Applications include file processing, network communication, and database BLOB operations. It is important to note encoding consistency (e.g., matching utf8 and base64 conversions), avoid overflow (values exceeding 255 will be truncated), and manage off-heap memory reasonably to prevent leaks. Mastering Buffer is crucial for understanding Node.js binary data processing.
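The creation and conversion methods above in action:

```js
// Three ways to create a Buffer
const a = Buffer.alloc(4);                 // <Buffer 00 00 00 00>
const b = Buffer.from([72, 105]);          // from a byte array ("Hi")
const c = Buffer.from('Hello', 'utf8');    // from a string + encoding

console.log(c.length);                     // 5 (bytes, not characters)
console.log(c.toString('base64'));         // "SGVsbG8="
console.log(Buffer.concat([b, c]).toString('utf8'));  // "HiHello"

c[0] = 300;                                // values over 255 are truncated
console.log(c[0]);                         // 44 (300 & 0xFF)
```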
A Guide to Using `exports` and `require` in the Node.js Module System
The Node.js module system enables code reuse, organization, and avoids global pollution by splitting files. Each .js file is an independent module; content inside is private by default and must be exposed via exports. Exports can be done through `exports` (mounting properties) or `module.exports` (directly assigning an object), with the latter being the recommended approach (as `exports` is a reference to it). Imports use `require`, with local modules requiring relative paths and third-party modules directly using package names. Mastering export and import is fundamental to Node.js development and enhances code organization capabilities.
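A minimal two-file sketch (file names are hypothetical):

```js
// math.js — expose functions by assigning module.exports
module.exports = {
  add: (a, b) => a + b,
  sub: (a, b) => a - b,
};
// Alternatively, mount properties one at a time:
// exports.add = (a, b) => a + b;   // works because exports references module.exports

// app.js — local modules need a relative path; packages use the bare name
const math = require('./math');
const fs = require('fs');
console.log(math.add(2, 3));   // 5
```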
What Can Node.js Do? 5 Must-Do Practical Projects for Beginners
Node.js is a tool based on Chrome's V8 engine that enables JavaScript to run on the server side. Its core advantages are non-blocking I/O and event-driven architecture, making it suitable for handling high-concurrency asynchronous tasks. It has a wide range of application scenarios: developing web applications (e.g., with Express/Koa frameworks), API interfaces, real-time applications (e.g., real-time messaging using Socket.io), command-line tools, and data analysis/crawlers. For beginners, the article recommends 5 practical projects: a personal blog (using Express + EJS + file reading/writing), a command-line to-do list (using commander + JSON storage), a RESTful API (using Express + JSON data), a real-time chat application (using Socket.io), and a weather query tool (using axios + third-party APIs). These projects cover core knowledge points such as route design, asynchronous operations, and real-time communication. In summary, it emphasizes that starting with Node.js requires hands-on practice. Completing these projects allows gradual mastery of key skills. It is recommended to begin with simple projects, consistently practice by consulting documentation and referring to examples, and quickly enhance practical capabilities.
Node.js Event Loop: Why Is It So Fast?
This article uses the analogy of a coffee shop waiter to explain the core mechanism of Node.js for efficiently handling concurrent requests—the event loop. Despite being single-threaded, Node.js can process a large number of concurrent requests efficiently, with the key lying in the collaboration between non-blocking I/O and the event loop: when executing asynchronous operations (such as file reading and network requests), Node.js delegates the task to the underlying libuv library and immediately responds to other requests. Once the operation is completed, the callback function is placed into the task queue. The event loop is the core scheduler, processing tasks in fixed phases: starting with timer callbacks (Timers), system callbacks (Pending Callbacks), followed by the crucial Poll phase to wait for I/O events, and then handling immediate callbacks (Check) and close callbacks (Close Callbacks). It ensures the ordered execution of asynchronous tasks through the call stack, task queues, and phase-based processing. The efficient design stems from three points: non-blocking I/O avoids CPU waiting, callback scheduling is executed in an ordered manner across phases, and the combination of single-threaded execution with asynchronous concurrency achieves high throughput. Understanding the scheduling logic of the event loop helps developers write more efficient Node.js code.
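A classic ordering experiment that makes the phases visible (run as a script, not in the REPL, for stable results):

```js
const fs = require('fs');

// At the top level, the relative order of timers vs. check can vary run to run
setTimeout(() => console.log('timers phase'), 0);
setImmediate(() => console.log('check phase'));

fs.readFile(__filename, () => {
  console.log('poll phase: I/O callback');
  // Inside an I/O callback the order is deterministic: the check phase
  // runs before the next timers pass, so setImmediate fires first here
  setImmediate(() => console.log('setImmediate after I/O'));
  setTimeout(() => console.log('setTimeout after I/O'), 0);
});

console.log('synchronous code runs first');
```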
Writing Your First Web Server with Node.js: A Quick Start with the Express Framework
This article introduces the method of building a web server using Node.js and Express. Based on the V8 engine, Node.js enables JavaScript to run on the server side, while Express, as a popular framework, simplifies complex tasks such as routing and request handling. For environment preparation, first install Node.js (including npm), and verify it using `node -v` and `npm -v`. Next, create a project folder, initialize it with `npm init -y`, and install the framework with `npm install express`. The core step is writing `server.js`: import Express, create an instance, define a port (e.g., 3000), use `app.get('/')` to define a GET request for the root path and return text, then start the server with `app.listen`. Access `http://localhost:3000` to test it. Extended features include adding more routes (e.g., `/about`), dynamic path parameters, returning JSON (`res.json()`), and hosting static files (`express.static`). The key steps are summarized as: installing tools, creating a project, writing routes, and starting the test, laying the foundation for subsequent learning of middleware, dynamic routing, etc.
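A possible version of the `server.js` described, with the mentioned extensions folded in:

```js
// server.js — minimal Express app from the steps above
const express = require('express');
const app = express();
const port = 3000;

app.get('/', (req, res) => res.send('Hello from Express!'));
app.get('/about', (req, res) => res.send('About page'));
app.get('/users/:id', (req, res) => res.json({ id: req.params.id })); // dynamic path parameter

app.use(express.static('public'));   // host static files from ./public

app.listen(port, () => console.log(`Listening on http://localhost:${port}`));
```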
Introduction to Node.js Asynchronous Programming: Callback Functions and Promise Basics
Node.js, due to JavaScript's single-threaded nature, requires asynchronous programming to handle high-concurrency I/O operations (such as file reading and network requests). Otherwise, synchronous operations will block the main thread, leading to poor performance. The core of asynchronous programming is to ensure that time-consuming operations do not block the main thread and that results are notified via callbacks or Promises upon completion. Callback functions were the foundation of early asynchronous programming. For example, the callback of `fs.readFile` receives `err` and `data`, which is simple and intuitive but prone to "callback hell" (with deep nesting and poor readability). Error handling requires repetitive `if (err)` checks. Promises address callback hell by being created with `new Promise`, having states: pending (in progress), fulfilled (success), and rejected (failure). They enable linear and readable asynchronous code through `.then()` chaining and centralized error handling with `.catch()`, laying the groundwork for subsequent `async/await`. Core Value: Callback functions are foundational, Promises enhance readability, and asynchronous thinking is key to building efficient Node.js programs.
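The two styles side by side, wrapping the same `fs.readFile` call (the file name is a placeholder):

```js
const fs = require('fs');

// Callback style: error-first signature
fs.readFile('a.txt', 'utf8', (err, data) => {
  if (err) return console.error(err);
  console.log(data);
});

// Promise style: linear chaining, centralized error handling
const readFileP = (path) =>
  new Promise((resolve, reject) => {
    fs.readFile(path, 'utf8', (err, data) => (err ? reject(err) : resolve(data)));
  });

readFileP('a.txt')
  .then((data) => console.log(data))
  .catch((err) => console.error(err));
```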
Detailed Explanation of Node.js Core Module fs: Easily Implement File Reading and Writing
The `fs` module in Node.js is the core tool for interacting with the file system, supporting both synchronous and asynchronous APIs. Synchronous methods block code execution, while asynchronous methods are non-blocking and suit high concurrency; beginners are advised to start with asynchronous operations. Basic operations include file reading and writing: for asynchronous reading, use `readFile` (with callbacks handling errors and data), and for synchronous reading, use `readFileSync` (wrapped in `try/catch`). Writing can either overwrite (`writeFile`) or append (`appendFile`). Directory operations include `mkdir` (supports recursive creation), `readdir` (lists directory contents), and `rmdir` (only removes empty directories). Path handling should use the `path` module, combined with `__dirname` (the directory where the script is located) to construct absolute paths, avoiding relative paths that depend on where the process is launched. For large files, streams should be used to read/write data in chunks and avoid excessive memory usage. Common issues: path errors are resolved by using absolute paths, and large files should be processed with `pipe` and streams. Practical suggestions: start with simple read/write and directory operations, combine them with the `path` module, and come to appreciate the asynchronous non-blocking model.
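A sketch combining `path.join` with `__dirname` and a streamed copy for large files (file names are placeholders):

```js
const fs = require('fs');
const path = require('path');

// Build an absolute path so the script works regardless of where it is run from
const file = path.join(__dirname, 'data', 'log.txt');

fs.appendFile(file, 'one more line\n', (err) => {
  if (err) return console.error(err);
  console.log('appended');
});

// Large files: stream in chunks instead of loading everything into memory
fs.createReadStream(path.join(__dirname, 'big.csv'))
  .pipe(fs.createWriteStream(path.join(__dirname, 'copy.csv')));
```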
Node.js npm Tools: A Comprehensive Guide from Installation to Package Management
This article introduces the core knowledge of Node.js and npm. Node.js is a JavaScript runtime environment based on Chrome's V8 engine, and npm is its default package manager for downloading, installing, and managing third-party packages. **Installation**: Node.js can be installed on Windows, Mac, and Linux via the official website or a package manager (npm ships with Node.js). After installation, verify with `node -v` and `npm -v`. **Core npm Functions**:
- Initialize a project with `npm init` to generate `package.json` (the project configuration file).
- Install dependencies: local (the default, per-project) or global (`-g`, system-wide); categorized as production (`--save`, the default in modern npm) or development (`--save-dev`) dependencies.
- Manage dependencies: view, update, uninstall (`npm uninstall`), and so on.
**Common Commands**: `npm install` (install), `npm list` (view), `npm update` (update), etc. For slow registry access (common in China), switch to the npmmirror (Taobao) registry with `npm config set registry` or use cnpm. **Notes**: Avoid committing `node_modules` to Git, use semver prefixes (`^` or `~`) deliberately, and prefer local installs. npm is a core tool for Node.js development; mastering it improves efficiency.
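The core commands collected (the registry URL reflects the current npmmirror address):

```bash
npm init -y                        # generate package.json with defaults
npm install express                # production dependency (saved by default)
npm install --save-dev nodemon     # development-only dependency
npm install -g typescript          # global install
npm list --depth=0                 # top-level installed packages
npm uninstall express              # remove a dependency
npm config set registry https://registry.npmmirror.com   # mirror for faster access in China
```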