NOTE: Prints a standard "Hello, World!" program in C.
#include <stdio.h>
int main()
{
printf("Hello, World!");
return 0;
}
First, the C preprocessor, cpp, expands all macro definitions and include statements (and anything else that starts with a #) and passes the result to the actual compiler. The preprocessor is not very interesting: it just replaces the shortcuts you used in your code with more code. The output of cpp is plain C; if you had no preprocessor statements in your file, you would not need to run cpp at all. The preprocessor requires no knowledge of the target architecture: given the correct include files, you could preprocess your C files on a Linux machine, take the output to the instructional machines, and pass it to cc. To see the output of the preprocessor, use cc -E.
The compiler effectively translates preprocessed C code into assembly code, performing various optimizations along the way as well as register allocation. Since a compiler generates assembly code specific to a particular architecture, you cannot use the assembly output of cc from an Intel Pentium machine on one of the instructional machines (Digital Alpha machines).
The assembly code generated by the compilation step is then passed to the assembler, which translates it into machine code; the resulting file is called an object file. On the instructional machines, both cc and gcc use the native assembler, as, provided by UNIX. You could write an assembly language program and pass it directly to as, and even to cc (this is what we do in project 2 with sys.s). An object file is a binary representation of your program. The assembler gives a memory location to each variable and instruction; we will see later that these memory locations are actually represented symbolically or via offsets. It also makes a list of all the unresolved references that presumably will be defined in other object files or libraries, e.g. printf. A typical object file contains the program text (instructions) and data (constants and strings), information about instructions and data that depend on absolute addresses, a symbol table of unresolved references, and possibly some debugging information. The UNIX command nm allows you to look at the symbols (both defined and unresolved) in an object file.
Since an object file will be linked with other object files and libraries to produce a program, the assembler cannot assign absolute memory locations to all the instructions and data in a file. Rather, it writes some notes in the object file about how it assumed things were laid out. It is the job of the linker to use these notes to assign absolute memory locations to everything and resolve any unresolved references. Again, both cc and gcc on the instructional machines use the native linker, ld. Some compilers choose to have their own linkers, so that optimizations can be performed at link time; one such optimization is aligning procedures on page boundaries. The linker produces a binary executable that can be run from the command interface.
Notice that you could invoke each of the above steps by hand. Since it is an annoyance to call each part separately, as well as to pass the correct flags and files, cc does this for you. For example, you could run the entire process by hand by invoking /lib/cpp, then cc -S, then /bin/as, and finally ld. If you think this is easy, try compiling a simple program in this way.
When you type a.out at the command line, a whole bunch of things must happen before your program is actually run. The loader magically does these things for you. On UNIX systems, the loader creates a process. This involves reading the file and creating an address space for the process. Page table entries for the instructions, data and program stack are created and the register set is initialized. Then the loader executes a jump instruction to the first instruction in the program. This generally causes a page fault and the first page of your instructions is brought into memory. On some systems the loader is a little more interesting. For example, on systems like Windows NT that provide support for dynamically loaded libraries (DLLs), the loader must resolve references to such libraries similar to the way a linker does.
Figure 2 illustrates a typical layout for program memory. It is the job of the loader to map the program, static data (including globals and strings) and the stack to physical addresses. Notice that the stack is mapped to the high addresses and grows down and the program and data are mapped to the low addresses. The area labeled heap is where the data you allocate via malloc is placed. A call to malloc may use the sbrk system call to add more physical pages to the program's address space (for more information on malloc, free and sbrk, see the man pages).
A call to a procedure is a context switch in your program. Just like any other context switch, some state must be saved by the calling procedure, or caller, so that when the called procedure, or callee, returns, the caller may continue execution without distraction. To enable separate compilation, a compiler must follow a set of rules for use of the registers when calling procedures. This procedure call convention may differ across compilers (do cc and gcc use the same calling convention?), which is why object files created by one compiler cannot always be linked with those of another compiler. A typical calling convention involves action on the part of both the caller and the callee. The caller places the arguments to the callee in some agreed-upon place; this place is usually a few registers, and the extras are passed on the stack (the stack pointer may need to be updated). Then the caller saves the value of any registers it will need after the call and jumps to the callee's first instruction. The callee then allocates memory for its stack frame and saves any registers whose values are guaranteed to be unaltered through a procedure call, e.g. the return address. When the callee is ready to return, it places the return value, if any, in a special register and restores the callee-saved registers. It then pops the stack frame and jumps to the return address.
Microsoft Visual Studio Community's compiler is called cl.exe, and can be invoked from the command line by calling cl.exe <filename>, where filename is one or more source files to compile.
cl.exe Hello.cpp
This compiles Hello.cpp into Hello.obj and links it into the executable Hello.exe.
Hardware - wrapped by assembly language; the first ~4 KB of memory is reserved for the bootloader (BIOS)
Bootloader - the whole brain of the OS
Win32 SDK - Microsoft's C API wrapper around the OS for native (GUI) developers
MFC - Microsoft Foundation Classes - a C++ wrapper around the SDK
Component Object Model - COM is a binary-interface standard for software components
COM+ - another wrapper around COM from the Windows NT era, largely folded into COM with Windows 2000
JVM / CLR - Java Virtual Machine, .NET (not for driver development)
Web development -
* If you write something in e.g. the CLR (.NET), the code has to pass through each layer below before it's runnable.
WinRT - Windows Runtime (launched in 2012 for mobile; closer to COM, layered around COM+)
Nano-COM (a.k.a XPCOM) ..?
OpenGL is a specification created by Silicon Graphics (SGI), the founder, and now standardized by the Khronos Group.
Fixed-Function Pipeline (e.g. legacy OpenGL, up to ~3.0) - you can't change the pipeline stages
More on legacy OpenGL: it has no built-in support for reading and writing image files (PNG, JPG, etc.).
Programmable Pipeline (e.g. Vulkan, modern OpenGL) - you can customize the pipeline with shaders
E.g. cooking chicken - you add the Chicken (VS, vertex shader) → Masala (TS, tessellation shader) → Salt (GS, geometry shader) → Each piece (FS, fragment shader)
Further improvements: - define the fixed-function pipeline in more detail
Legacy OpenGL uses the fixed functional pipeline ...
GLUT (OpenGL Utility Toolkit) is a deprecated windowing abstraction layer for rendering OpenGL code, but it is still widely used in older projects.
To set up Visual Studio Community 2019 to use freeglut, download the library from http://freeglut.sourceforge.net/, extract it to a folder like C:\libraries\freeglut\ and add it to your project's settings, found under Project > <%YOUR_PROJECT_NAME%> Properties on the file menu.
You have to tell the compiler to look for the external library by adding the library folders under:
C/C++ > General > Additional Include Directories, pointing to the folder where you extracted freeglut, and
Linker > General > Additional Library Directories.
Remove the command-line window under Linker > System > SubSystem by expanding the menu on the right side and selecting Windows (/SUBSYSTEM:WINDOWS).
To compile and run this project you have to set the application's main entry point, found under Project > <%YOUR_PROJECT_NAME%> Properties > Linker > Advanced > Entry Point, and set it to mainCRTStartup.
Screen coordinate { 0.0, 0.0 } is in the upper-left corner on Windows, but in the lower-left corner in OpenGL (the y-axis points up). The vertices below are numbered anti-clockwise, so given a square:
float vertices[] = {
-0.5, 0.5, 0.0, // 1st - 0 0 +-------+ 3
-0.5, -0.5, 0.0, // 2nd - 1 | |
0.5, -0.5, 0.0, // 3rd - 2 | | <-- Covers half the screen
0.5, 0.5, 0.0 // 4th - 3 1 +-------+ 2
};
int indices[] = { 3, 2, 1, 3, 1, 0 };
will create a square (two triangles). If you remove the first triangle, 3, 2, 1, the remaining indices 3, 1, 0 draw a single triangle covering the upper-left half, etc.
Transform default viewspace to follow windows coordinates with glTranslatef(0.0f, -h, 0.0f).
NOTE: Full implementation of displaying a colored triangle in the fixed-function pipeline (legacy OpenGL) using freeglut.
Take a look at glutEnterGameMode() / glutLeaveGameMode() and glutGameModeString("990x768:32@75")
** Using the .cpp extension on the source file makes Visual Studio recognize it as C++ code (here we compile C code as C++).
#include <GL/freeglut.h>
void initialize();
void display(void);
void resize(int, int);
void keyboard(unsigned char, int, int);
void mouse(int, int, int, int);
bool bIsFullscreen = false;
int main(int argc, char* argv[])
{
glutInit(&argc, argv); // Initialize glut with commandline args
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGBA); // GLUT_SINGLE single framebuffer instance, show this to the user. GLUT_RGBA is the color schema
glutInitWindowSize(800, 600); // Set the program width and height
glutInitWindowPosition(100, 100); // Position the app from the top left corner
glutCreateWindow("C-OpenGL first triangle"); // Create the app window based on the previous params
initialize(); // Sets the clear color via glClearColor (used by glClear when clearing the framebuffer)
glutDisplayFunc(display); // Display something where you will be rendering everything
glutReshapeFunc(resize); // Resize the window (unhandled in this project)
glutKeyboardFunc(keyboard); // Callback to the keyboard
glutMouseFunc(mouse); // Callback to the mouse
glutMainLoop(); // The program main loop
return 0; // ANSI C requires that main function returns an int
}
void initialize()
{
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
}
void display()
{
glClear(GL_COLOR_BUFFER_BIT);
// Transform matrix
glMatrixMode(GL_MODELVIEW); // Set model projection for viewspace
glLoadIdentity(); // Reset the modelview matrix; OpenGL's default coordinates run from -1 to 1 with the origin at the center, anti-clockwise winding
// Note on legacy OpenGL code:
// All draw calls in legacy OpenGL ( < OpenGL 3.0 core ) must be placed in between calls to glBegin and glEnd
glBegin(GL_TRIANGLES);
// Transformation is handled by glLoadIdentity
glColor3f(1.0f, 0.0f, 0.0f); // Set the color for vertex[0] to RED
glVertex2f(0.0f, 1.0f); // vertex[0] - top
glColor3f(0.0f, 1.0f, 0.0f); // Set the color for vertex[1] to GREEN
glVertex2f(-1.0f, -1.0f); // vertex[1] - bottom left
glColor3f(0.0f, 0.0f, 1.0f); // Set the color for vertex[2] to BLUE
glVertex2f(1.0f, -1.0f); // vertex[2] - bottom right
glEnd();
glFlush(); // Single framebuffer, so needs a flush!
}
// callback function for window resize events
void resize(int width, int height)
{
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
}
// callback function for keyboard input
void keyboard(unsigned char key, int x, int y)
{
switch (key) {
case 27: // ESC
glutLeaveMainLoop();
break;
case 'f':
case 'F':
if (bIsFullscreen == false) {
glutFullScreen();
bIsFullscreen = true;
}
else {
glutLeaveFullScreen();
bIsFullscreen = false;
}
break;
}
}
// callback function for mouse handling
void mouse(int button, int state, int x, int y)
{
switch (button) {
case GLUT_RIGHT_BUTTON:
glutLeaveMainLoop();
break;
}
}
Add something about: glutSpecialFunc(keyboard2) with void keyboard2(int key, int x, int y) {}, and dive deeper into an example of glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH). Give an example of a complete game written using all FreeGLUT functions and detail glutGetModifiers(). Also detail glutMotionFunc(myMouseMotion) and void myMouseMotion(int x, int y) {} to handle mouse motion; FreeGLUT will only call this function after a mouse button is pressed and the mouse is moved. To detect when the mouse cursor is hovering over an object without being clicked, FreeGLUT has glutPassiveMotionFunc(myMousePassive) with void myMousePassive(int x, int y) {}. Note: all these functions use the top-left corner of the window as { 0, 0 }! Detail more about glutIdleFunc(myIdle) and void myIdle() {}. Then there are core-profile functions like glutInitContextVersion(4, 3), glutInitContextProfile(GLUT_CORE_PROFILE) and glutInitContextFlags(GLUT_FORWARD_COMPATIBLE), and compatibility settings like glutInitContextProfile(GLUT_COMPATIBILITY_PROFILE) combined with glutInitContextFlags(GLUT_DEBUG). NOTE: Setting a modern OpenGL version requires GLEW! There is also glutGet(GLUT_ELAPSED_TIME) to find the time elapsed since glutInit was called. Even further there is glutPostRedisplay(), which can be called in the idle function to redisplay the current frame.
...
WinMain is the entry point of a Windows program (the equivalent of C's main); WINAPI is the calling convention used by the Windows API.
You have two types of programmers: command-line developers and Windows GUI application developers.
Linux code commonly uses camelCase, but Windows follows Hungarian notation.
In order to create an instance of a Windows application using the Windows API you have to include windows.h, which comes bundled with Windows.
#include <windows.h>
LRESULT CALLBACK WndProc (HWND, UINT, WPARAM, LPARAM);
int WINAPI WinMain (HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow) {
// Declaration for a WIN32 SDK app (first step)
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT( "Win32-API-SDK" );
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc; // Registering to the callback (handles all events)
wndclass.hInstance = hInstance; // Registering the instance for this window
// Register the class to the OS (second step)
RegisterClassEx(&wndclass);
// WND_CLASS = RegisterClass, WND_CLASSEX = RegisterClassEx, also for extra win options
hwnd = CreateWindow (
szAppName, // Giving the CreateWindow-function your instance classname
TEXT("Win32-API-SDK"), // Caption of the window (titlebar)
WS_OVERLAPPEDWINDOW, // Contains 6 styles - WS_CAPTION, WS_OVERLAPPED, WS_SYSMENU (icon left corner), WS_THICKFRAME, WS_MINIMIZEBOX, WS_MAXIMIZEBOX
CW_USEDEFAULT, // Starting X of the window
CW_USEDEFAULT, // Starting Y
CW_USEDEFAULT, // Starting width
CW_USEDEFAULT, // Starting height
NULL, // Do you have a parent window? NULL = OS IS PARENT
NULL, // Any menus; NULL = NO
hInstance, // Current instance of the app
NULL // Used in API hooking, but not much used today
);
// Event loop goes here
return 0;
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
return (DefWindowProc(hwnd, uMsg, wParam, lParam)); // Pass all unhandled messages to the default handler
}
HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow
HINSTANCE hInstance is a unique id given by the OS so it can keep track of the running status of each instance.
HINSTANCE hPrevInstance is a leftover from co-operative multitasking, kept for backwards compatibility; under Win32 it is always NULL.
LPSTR lpCmdLine holds the command-line arguments (used later...)
int iCmdShow tells how the window should initially be shown (passed on to ShowWindow).
WNDCLASSEX wndclass; to create an instance of the application.
HWND hwnd is the unique handle to the window application, and child apps have their separate handle.
MSG msg; get back to this
TCHAR szAppName[] = TEXT( "Win32-API-SDK" );
wndclass.cbSize = sizeof(WNDCLASSEX); declares the structure's size in bytes.
wndclass.style = CS_HREDRAW | CS_VREDRAW; - redraw on horizontal and vertical resize.
wndclass.cbClsExtra = 0; Extra bytes for the class; can be used e.g. to make circular windows.
wndclass.cbWndExtra = 0; Extra bytes for the window.
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION); Win32 API; first param: instance handle (NULL = load a predefined system icon), second param: IDI_APPLICATION is the default app icon.
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW); Win32 API call; NULL = predefined system cursor, IDC_ARROW is the default built-in arrow.
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH); sets the app background color; GetStockObject returns an HBRUSH.
wndclass.lpszClassName = szAppName; binds the class name to the app.
wndclass.lpszMenuName = NULL; Do you need a file menu and more? NULL = none.
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION); the small icon you see on the taskbar.
wndclass.lpfnWndProc = WndProc; // Registering to the callback (handles all events)
wndclass.hInstance = hInstance; // Registering the instance for this window
Register the class to the OS (Second step)
// WND_CLASS = RegisterClass, WND_CLASSEX = RegisterClassEx, also for extra win options
Physical memory addresses - ALL available memory. The OS occupies the start, e.g. 0 - 440.
Virtual memory addresses - your program's memory; it may physically begin at 440 but is mapped to virtual addresses 0 - 60.
CreateWindowA - A = ANSI; CreateWindowW - W = Unicode. CreateWindow itself is a macro that expands to one of the two depending on whether UNICODE is defined.
The parameter list of CreateWindow takes the following arguments, explained in the comments below. (All parameters present in CreateWindow are the same as those found in the CREATESTRUCT structure.)
szAppName, // Giving the CreateWindow-function your instance classname
TEXT("Win32-API-SDK"), // Caption of the window (titlebar)
WS_OVERLAPPEDWINDOW, // Contains 6 styles - WS_CAPTION, WS_OVERLAPPED, WS_SYSMENU (icon left corner), WS_THICKFRAME, WS_MINIMIZEBOX, WS_MAXIMIZEBOX
CW_USEDEFAULT, // Starting X of the window
CW_USEDEFAULT, // Starting Y
CW_USEDEFAULT, // Starting width
CW_USEDEFAULT, // Starting height
NULL, // Do you have a parent window? NULL = OS IS PARENT
NULL, // Any menus; NULL = NO
hInstance, // Current instance of the app
NULL // Used in API hooking, but not much used
This is where you point your application handle to the window class:
hwnd = CreateWindow(
szAppName, // Giving the CreateWindow function your instance classname
TEXT("Win32-API-SDK"), // Titlebar window caption
WS_OVERLAPPEDWINDOW, // Contains 6 styles - WS_CAPTION, WS_OVERLAPPED, WS_SYSMENU (icon left corner), WS_THICKFRAME, WS_MINIMIZEBOX, WS_MAXIMIZEBOX
CW_USEDEFAULT, // Starting X of the window
CW_USEDEFAULT, // Starting Y
CW_USEDEFAULT, // Starting width
CW_USEDEFAULT, // Starting height
NULL, // Do you have a parent window? NULL = OS IS PARENT
NULL, // Any menus? NULL = No
hInstance, // Current instance of the app
NULL // Used in API hooking, but not much used
);
After you have created the window instance it's time to tell your application to show it on screen.
ShowWindow(
hwnd, // Send your handle to the ShowWindow
SW_NORMAL // Show Window Normally (see MSDN documentation for further options)
);
You then tell the OS to deliver all events for your application's handle as messages.
UpdateWindow(
hwnd // Give the handle for the window to the OS
);
Then you write an event loop to handle your application's user actions. Think of this event loop as the heart of the application, running in an infinite loop while waiting for user actions like keyboard input, mouse button clicks and movement, and program focus changes (minimization / maximization).
WRITE MORE ABOUT THE TranslateMessage and DispatchMessage...
// Running the program in an infinite loop (the heart of the application).
// Awaits system messages - software or hardware, and directs it to the callback.
// GetMessage is an API which waits for the next message (hardware or software)
// &msg is a structure that receives the information about the event that has occurred
// NULL = receive messages for all windows belonging to this thread (pass a window handle to filter to that window)
// 0 = wMsgFilterMin, start of the message filter range (0 together with the next 0 means no filtering)
// 0 = wMsgFilterMax, end of the message filter range
while (GetMessage(&msg, NULL, 0, 0)) {
TranslateMessage(&msg); // Translates virtual-key messages into character messages (WM_CHAR), e.g. key code 65 becomes 'A'
DispatchMessage(&msg); // Dispatches the message to the callback function
}
return ((int)msg.wParam);
In order for your application to handle events, GetMessage retrieves all software and hardware messages, and DispatchMessage hands them to a WndProc...
WRITE MORE ABOUT HWND, UINT, WPARAM, LPARAM...
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_KEYDOWN:
switch (wParam)
{
case VK_SPACE:
// The first TEXT argument is the message text, the second is the caption of the message box; MB_OK = which buttons the box shows.
MessageBox(hwnd, TEXT("My message"), TEXT("My message"), MB_OK);
break;
}
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
If you compile and run your application you should see a window with your custom caption, a black background and not much else, like in the image below.
And just like in the previous example using GLUT, you can remove the console window by going into Project > <%YOUR_PROJECT_NAME%> Properties under Linker > System > SubSystem and setting Windows (/SUBSYSTEM:WINDOWS).
If you want to compile your project into a specific folder you can customize your Build Options under <%YOUR_PROJECT_NAME%> Properties > General > Output Directory. The default setting is $(SolutionDir)$(Platform)\$(Configuration)\, but you can change it to something like $(SolutionDir)\bin\$(ProjectName)
See the MSDN documentation on MSBuild Macros for more details.
Remember to add tests that the HWND and other objects were created correctly, e.g. if (!CreateWindow(...)) { ... }
Append some notes on #pragma comment(lib, "opengl32.lib") and #define _WIN32_WINNT 0x500 + #define WIN32_LEAN_AND_MEAN.
There are two types of rendering: online and offline rendering. Games render on the fly; this is called Immediate Mode Rendering.
There is also offline rendering, like pre-rendered assets such as the WMV files used in cutscenes.
Immediate Mode Rendering uses OS-specific machinery, so its implementation is up to the OS developers and the manufacturers of graphics card drivers.
The example below displays text in immediate mode in a Windows application.
Note that this code goes in the WndProc function, at the top, before the switch-statement.
TCHAR str[255] = TEXT("Hello, World!"); // TEXT() expands to a wide-character (L"") literal when UNICODE is defined
HDC hdc; // Handle to the device context
RECT rc; // Handle to the client area
This code goes inside the switch-statement
// This is only called by the OS
// UpdateWindow is the first time WM_PAINT is called
// If not defined, it calls the default call
case WM_PAINT:
GetClientRect(hwnd, &rc); // You are grabbing the physical client area (excluding titlebar, statusbar, etc)
// & = passing the address of the rect to the function; the rect holds left, top, right, bottom
// Avoid calling MessageBox or similar from here (it can retrigger WM_PAINT and hang or crash the program)
hdc = GetDC(hwnd); // DC = Device Context (This is calling the painter, there are many given by the OS)
SetBkColor(hdc, RGB(0, 0, 0)); // This is the background color that is printed (black) (painters colorbucket)
SetTextColor(hdc, RGB(0, 255, 0)); // This is the text color
DrawText(hdc, str, -1, &rc, DT_SINGLELINE | DT_CENTER | DT_VCENTER); // params: the context, the string, the amount of text to print (-1 = all), &rc = where to print, DT_SINGLELINE and the rest say how to align it (horizontally, vertically)
ReleaseDC(hwnd, hdc); // Release the painter
break;
If you get a LNK2019 error when compiling remember to change SubSystem to WINDOWS.
OpenGL uses this kind of device context (obtained via BeginPaint) for its rendering context.
Add this to the WndProc
TCHAR str[255] = TEXT("Hello, World!");
HDC hdc; // Handle to device context
RECT rc; // The handle to the client area
PAINTSTRUCT ps; // This is a list of brushes (structs of brushes)
// This is only called by the OS
case WM_PAINT:
GetClientRect(hwnd, &rc); // You are grabbing the physical client area (excluding titlebar, statusbar, etc)
// & = passing the address of the rect to the func; the rect holds left, top, right, bottom
// Avoid calling MessageBox or similar from here (it can retrigger WM_PAINT and hang or crash the program)
hdc = BeginPaint(hwnd, &ps); // This is the context OpenGL uses
SetBkColor(hdc, RGB(0, 0, 0)); // This is the background color that is printed (black) (painters colorbucket)
SetTextColor(hdc, RGB(0, 125, 125)); // This is the text color
DrawText(hdc, str, -1, &rc, DT_SINGLELINE | DT_CENTER | DT_VCENTER); // the context, the string, the amount of text to print (-1 = all), &rc = where to print, DT_SINGLELINE and the rest say how to align it (horizontally, vertically)
EndPaint(hwnd, &ps); // Releases the device context obtained from BeginPaint and finishes the paint
break;
So far all our requests to repaint the application window via WM_PAINT have been handled automatically by the OS, but what if we need to trigger WM_PAINT ourselves?
Example: The programmer wants to repaint the background of the application window when the user presses R, G, B etc. Each key indicates a color to be set as the new background color of the client area of the window.
Inside the WndProc function you declare an instance of HBRUSH and a static int variable to carry user-defined state into WM_PAINT.
...
HBRUSH hbrush = NULL;
static int keyPressed;
Below the WM_KEYDOWN case you add another case called WM_CHAR.
Inside it, your task is to change the background to the given color based on the user's input: R for RED, G for GREEN, B for BLUE, M for MAGENTA, Y for YELLOW, K for WHITE, W for BLACK, O for ORANGE.
...
// Alternative to WM_KEYDOWN (if you need to handle lowercase and capital chars)
case WM_CHAR:
// This handles keyPressed
switch (wParam)
{
case 'r':
keyPressed = 1;
break;
case 'g':
keyPressed = 2;
break;
case 'b':
keyPressed = 3;
break;
case 'm':
keyPressed = 4;
break;
case 'y':
keyPressed = 5;
break;
case 'k':
keyPressed = 6;
break;
case 'w':
keyPressed = 7;
break;
case 'o':
keyPressed = 8;
break;
}
// InvalidateRect is a built-in function that calls WM_PAINT for you!
// @param: handle to window, which rectangle to repaint (NULL = the whole client area), erase the background first?
InvalidateRect(hwnd, NULL, TRUE);
break;
Your application will now handle single keyboard inputs and direct them to the InvalidateRect function, which in turn calls WM_PAINT for you.
To process the request to change the background color you place the following switch-case between the calls to BeginPaint(...) and EndPaint(...) inside WM_PAINT.
...
switch (keyPressed)
{
case 1:
hbrush = CreateSolidBrush(RGB(255, 0, 0));
break;
case 2:
hbrush = CreateSolidBrush(RGB(0, 255, 0));
break;
case 3:
hbrush = CreateSolidBrush(RGB(0, 0, 255));
break;
case 4:
hbrush = CreateSolidBrush(RGB(255, 0, 255));
break;
case 5:
hbrush = CreateSolidBrush(RGB(255, 255, 0));
break;
case 6:
hbrush = CreateSolidBrush(RGB(255, 255, 255));
break;
case 7:
hbrush = CreateSolidBrush(RGB(0, 0, 0));
break;
case 8:
hbrush = CreateSolidBrush(RGB(255, 165, 0));
break;
}
FillRect(hdc, &rc, hbrush);
DeleteObject(hbrush);
...
The GetMessage API checks all hardware and input events, but it blocks until a message arrives; a game loop uses PeekMessage instead so it can render in between messages.
GAME LOOP
while (1)
{
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) // Non-blocking check for events, unlike GetMessage
{
if (msg.message == WM_QUIT)
{
break; // Quit the loop (and the window)
}
else
{
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else
{
if (bWindowIsActive)
{
RenderingFunctions(); // Render a frame while there are no messages to process
}
else
{
// Do nothing
}
}
}
NOTE: If the user continuously holds 'w' to run forward, there are still tiny pauses between the key messages, and it is in those gaps that the rendering is processed.
To make a fullscreen window we need to remove the WS_OVERLAPPEDWINDOW style so the application can cover the entire screen, without the caption bar and borders.
Before we enter fullscreen we also need to store the window's current position and state, so they can be restored when switching back to windowed mode.
We write our function to handle toggling between window modes by either declaring a function prototype at the top, or writing the function before our main.
void toggle_fullscreen(void);
We also have to declare variables that stores the data passed to the window on state change, so we declare the following variables in the global namespace:
HWND gHwnd;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsFullscreen = false;
Initialize the global window handle (gHwnd) inside WinMain, below CreateWindow, as:
gHwnd = hwnd;
Then we implement the toggle_fullscreen function:
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(gHwnd, GWL_STYLE); // GetWindowLong retrieves the style (or other info) of the specified window
// If dwStyle (bitwise) and WS_OVERLAPPEDWINDOW is true... (both contains WS_OVERLAPPEDWINDOW)
if (dwStyle & WS_OVERLAPPEDWINDOW) {
// Retrieves the show state and the restored, minimized, and maximized positions of the specified window
bIsWindowPlacement = GetWindowPlacement(gHwnd, &wpPrev); // GetWindowPlacement retreives the position of current active window
hMonitor = MonitorFromWindow(gHwnd, MONITOR_DEFAULTTOPRIMARY); // Ask the OS for a handle to the (primary) monitor the window is on
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
// This function changes an attribute of the specified window
SetWindowLong(gHwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW); // Remove the WS_OVERLAPPEDWINDOW state
SetWindowPos(gHwnd, HWND_TOP, // Assign the monitor coords to the SetWindowPos
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
// Restore the previous (windowed) style of the window
SetWindowLong(gHwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(gHwnd, &wpPrev);
SetWindowPos(gHwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
Lastly we handle switching to and from fullscreen in the WM_KEYDOWN case inside our WndProc:
...
case 'f':
case 'F':
toggle_fullscreen();
break;
}
...
This example switches modes when the user presses f or F, but in most cases this would be done when the user presses alt + enter.
case WM_SYSKEYDOWN:
// Toggle between fullscreen and window mode
if (HIWORD(lParam) & KF_ALTDOWN) {
if (LOWORD(wParam) == VK_RETURN) {
toggle_fullscreen();
}
}
NOTE: The above code gives the Windows default sound when switching in and out of fullscreen using the key command alt+enter, so as an exercise try to implement the toggling in WM_SYSCHAR, or use a keyboard accelerator.
Keyboard accelerators send their result to WM_COMMAND or WM_SYSCOMMAND, so see if either of those fixes the bug. (Both are wrong: WM_COMMAND is for IDOK or menu items, and WM_SYSCOMMAND handles things like SC_SCREENSAVER and SC_MONITORPOWER.)
The (probably) correct solution is to use WM_SYSKEYDOWN, since that is for system keys like F10 or Alt. Test with InitCommonControlsEx and checking bit 29 of lParam in WM_SYSKEYDOWN (the context code; it is set when Alt is down) to see if that solves it!
Mention AdjustWindowRect() and how it affects correct client-area scaling in windowed mode...
Setting up OpenGL under win32 has two parts: the initialization and the update (render) loop.
ChoosePixelFormat() and SetPixelFormat() are both handled on the OS side and operate on a Device Context (HDC).
wglCreateContext() takes the Device Context (g_hdc) to the GPU side and returns a Rendering Context (HGLRC); wglMakeCurrent() binds the two. We need to include the windows library and the OpenGL library to be able to work with win32 and the wgl ("Wiggle") functions.
#include <windows.h>
#include <GL/gl.h>
int initialize(); // Function prototype
HWND g_hwnd; // Global Handle
HDC g_hdc = NULL; // Device Context
HGLRC g_hrc = NULL; // Rendering Context
Remember to initialize the global handle (g_hwnd) to the hwnd inside the WinMain by adding:
g_hwnd = hwnd;
We implement the initialization of OpenGL in the newly created initialize function:
int initialize()
{
PIXELFORMATDESCRIPTOR pfd; // Describes the pixel format to use while rendering
// LAYERPLANEDESCRIPTOR lpd; // Contains the palette for- and background layers
int iPixelFormatIndex; // The index given by the OS
// int iLayerPlane; // Initialize similar to PIXELFORMATDESCRIPTOR
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR)); // Zero out the structure before filling in the fields we care about
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR); // Size of the pfd structure (initializes the structure)
pfd.nVersion = 1; // The OS itself doesn't give you the newest OGL, but your graphics card driver does!
// This requests the basic version, which then gets replaced by the newest version (driver implementation)
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL; // Draw to a window (or device surface), with OpenGL support
pfd.iPixelType = PFD_TYPE_RGBA; // The pixel type is red, green, blue, alpha
pfd.cColorBits = 32; // The highest number of color bitplanes for each color buffer (8 * 4 = 32)
pfd.cRedBits = 8; // 8-bit red channel (specifying each channel gives flexibility to change the color layout)
pfd.cGreenBits = 8; // 8-bit green channel (a larger channel gives higher color precision)
pfd.cBlueBits = 8; // 8 bits blue channel
pfd.cAlphaBits = 8; // 8 bits alpha channel
// pfd.bReserved // Specifies the number of overlay and underlay planes.
// Bits 0 through 3 specify up to 15 overlay planes and bits 4 through 7 specify up to 15 underlay planes.
// Layers need a call to BOOL wglRealizeLayerPalette(HDC hdc, int iLayerPlane, BOOL bRealize); before use...
g_hdc = GetDC(g_hwnd); // Gets the Device Context of the OS
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd); // Ask the OS for the pixel format closest to the assigned vars (returns the index of the pixel format)
// It might only give you an approximation of the desired format
if (iPixelFormatIndex == 0) {
return -1;
}
// Give the PixelFormat struct data to my device context
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
// iLayerPlane = wglDescribeLayerPlane(g_hdc, iPixelFormatIndex, iLayerPlane, sizeof(LAYERPLANEDESCRIPTOR), &lpd);
// if (iLayerPlane == FALSE) {
// return errno;
// }
// if (wglDescribeLayerPlane ( HDC hdc, int iPixelFormat, int iLayerPlane, UINT nBytes, LPLAYERPLANEDESCRIPTOR plpd ) {
// }
// wgl (Wiggle) operates as a bridge between the OS (CPU side) and OpenGL (GPU side)
g_hrc = wglCreateContext(g_hdc); // Create a rendering context for g_hdc, stored in g_hrc (this happens on the GPU side via wgl)
if (g_hrc == NULL) {
return -3;
}
// Make the current context as g_hrc (rendering context)
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
}
There is also a GetDCEx(hwnd, hrgn, DCX_WINDOW | DCX_INTERSECTRGN | 0x10000); to further control how the device context is handled...
Then we implement a game loop to handle the OS messages and OpenGL rendering inside WinMain:
bool bIsRunning = true;
while (bIsRunning == true) {
// PeekMessage doesn't wait for messages - if there are no messages the else-clause runs.
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
// Handle your rendering context here!
}
}
To finish the initialize() function we simply add:
glClearColor(0.0f, 0.0f, 1.0f, 1.0f); // Clears the buffer with this value
return 0;
It's time to create the remaining functions to set up an instance of OpenGL. First declare the function prototypes globally:
void resize(int, int);
void display(void);
void uninitialize(void);
We only add a basic implementation of each function, but in the end this gives a working application displaying a white triangle.
NOTE: This reportedly only works on Nvidia; AMD needs a shader added. (Is this true for legacy OGL?)
First we add the function to resize the viewport of our application.
void resize(int w, int h)
{
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
}
Our rendering loop (here using legacy OpenGL) only clears the screen to the color that was set in initialize() with glClearColor() and adds a simple triangle. Notice that we use glFlush(), since this program currently uses a single buffer.
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT); // Takes the latest value from glClearColor and clear the buffers with that
// Rendering is added here
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBegin(GL_TRIANGLES);
glVertex2f(0.0f, 1.0f);
glVertex2f(-1.0f, -1.0f);
glVertex2f(1.0f, -1.0f);
glEnd();
glFlush(); // Needed in a single-buffered program (it flushes the buffer between each iteration)
}
It is also good practice to clean up the initialization once we are done (the user quits the program), so no resources are left tied up.
void uninitialize(void)
{
if (bIsFullscreen == true)
{
// This code is identical to the code in toggle_fullscreen()
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
}
Then, in WinMain, you can remove the UpdateWindow call and add the following code to do the OpenGL initialization. Note that the call to initialize has to come after initializing the global handle (g_hwnd).
// UpdateWindow(hwnd); // This is called by the OS automatically
g_hwnd = hwnd;
int result = initialize();
Then it's time to add the new functions to the WndProc. Remove the declarations in the WndProc and the code in WM_PAINT, and add the following code to the switch(uMsg) statement:
// This is only called by the OS (first when the window is created)
case WM_PAINT:
display(); // Used in a single-buffered program (this will not update the window each frame - instead place the display call in the game loop's else-clause!)
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam)); // The OS sends WM_SIZE on resize, with the new client width in LOWORD(lParam) and height in HIWORD(lParam)
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
If you haven't added opengl32.lib under Linker > Input > Additional Dependencies you can add this line at the top below your #include declarations:
#pragma comment(lib, "opengl32.lib")
Now compile and run your program to see a white triangle on a blue background. (Again note that this will only work on a computer with an Nvidia card!)
Comparing the creation of an application window to the earlier version using freeglut, we can see the similarities in the comments in the source code below.
glutInit(&argc, argv); // Calls WinMain internally
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGBA); // Calls the PFD struct internally
glutInitWindowSize(800, 600); // Calls CreateWindow internally
glutInitWindowPosition(100, 100); // Also mapped to CreateWindow
glutCreateWindow("C-OpenGL first triangle"); // Also mapped to CreateWindow
initialize(); // Corresponds to the glClearColor setup (glClear writes that color to the framebuffer)
glutDisplayFunc(display); // else-part of the messageloop
glutReshapeFunc(resize); // Similar to WM_SIZE
glutKeyboardFunc(keyboard); // Similar to WM_KEYDOWN
glutMouseFunc(mouse); // ...
glutMainLoop(); // Handles the main game loop
This writeup implements a modern OpenGL context, with more on HMONITOR and further improvements around #pragma and the _DEBUG flag: https://subscription.packtpub.com/book/business-and-other/9781800208087/1/ch01lvl1sec04/creating-the-application-class https://subscription.packtpub.com/book/business-and-other/9781800208087/1/ch01lvl1sec07/creating-a-window https://github.com/PacktPublishing/Hands-On-Game-Animation-Programming/blob/master/AllChapters/Code/WinMain.cpp
Here is the cleaned up source code of a win32 application running OpenGL:
#include <windows.h>
#include <GL/gl.h>
#include <stdbool.h>
#pragma comment(lib, "opengl32.lib")
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-SDK");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
CW_USEDEFAULT,
CW_USEDEFAULT,
CW_USEDEFAULT,
CW_USEDEFAULT,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
// Whatever you want to render - do it here!
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_KEYDOWN:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_PAINT:
display();
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
return 0;
}
void resize(int w, int h)
{
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBegin(GL_TRIANGLES);
glVertex2f(0.0f, 1.0f);
glVertex2f(-1.0f, -1.0f);
glVertex2f(1.0f, -1.0f);
glEnd();
glFlush();
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
From Innføring i grafikk programmering (Introduction to Graphics Programming):
The GL_MODELVIEW matrix describes where the camera is and which way it points: it transforms from object space to eye space.
The GL_PROJECTION matrix describes how the camera sees (the "lens"):
it transforms between eye space and clip space.
Clipping Area refers to the area that can be seen (i.e. captured by the camera), measured in OpenGL coordinates, and is defined as { -1, -1 } to { 1, 1 }. In OpenGL coordinates { 0, 0 } is the center of the screen.
Viewport refers to the display area on the window (screen), which is measured in pixels in screen coordinates (excluding the title bar).
A viewport is where the rendering will occur, and our screen space is shown in that particular viewport. If your native resolution is 1920x1080, your viewport defines the drawing size of the graphics to be displayed within the given screen size.
The framebuffer is a space in VRAM where the final viewport is stored. You can create your own framebuffer(s), even multiple, as needed.
Objects will be distorted if the aspect ratio of the clipping area and the viewport differ.
OpenGL defines its viewport using glViewport(x, y, width, height), where the starting point { 0, 0 } is in the lower left corner, up to { width, height } in the upper right corner.
To define a viewport you have to declare a width and a height variable (just put them in the global space for now):
int width;
int height;
You also have to store the width and height values in the WM_SIZE handler so you can manipulate the viewport with them:
case WM_SIZE:
...
width = LOWORD(lParam);
height = HIWORD(lParam);
break;
This example edits the viewport to draw your triangle in certain parts of the screen. Shift which part of the screen the triangle from the previous lesson is drawn in by using the numpad keys (or the number keys).
* Numpad0 should draw the triangle on the entire screen (like in the previous lesson)
* Numpad1 should draw the triangle in the lower left corner, covering only 1/4 of the screen
* Numpad2 should draw the triangle in the lower right corner, covering 1/4 of the screen
* Numpad3 should draw the triangle in the upper left corner, covering 1/4 of the screen
* Numpad4 should draw the triangle in the upper right corner, covering 1/4 of the screen
* Numpad5 should draw the triangle on the right side of the screen, covering the entire right half
* Numpad6 should draw the triangle on the left side of the screen, covering the entire left half
* Numpad7 should draw the triangle on the upper side, covering the entire upper half
* Numpad8 should draw the triangle on the lower side, covering the entire lower half
* Numpad9 should draw the triangle in the center of the viewport, covering 1/4 of the screen
Place this code in the WM_KEYDOWN:
case WM_KEYDOWN:
switch (wParam)
{
case VK_NUMPAD0:
// Full screen
glViewport(0, 0, (GLsizei)width, (GLsizei)height);
break;
case VK_NUMPAD1:
// Lower left corner
glViewport(0, 0, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_NUMPAD2:
// Lower right corner
glViewport((GLsizei)width / 2, 0, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_NUMPAD3:
// Upper left corner
glViewport(0, (GLsizei)height / 2, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_NUMPAD4:
// Upper right corner
glViewport((GLsizei)width / 2, (GLsizei)height / 2, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_NUMPAD5:
// Whole right side
glViewport((GLsizei)width / 2, 0, (GLsizei)width / 2, (GLsizei)height);
break;
case VK_NUMPAD6:
// Whole left side
glViewport(0, 0, (GLsizei)width / 2, (GLsizei)height);
break;
case VK_NUMPAD7:
// Whole upper half
glViewport(0, (GLsizei)height / 2, (GLsizei)width, (GLsizei)height / 2);
break;
case VK_NUMPAD8:
// Whole lower half
glViewport(0, 0, (GLsizei)width, (GLsizei)height / 2);
break;
case VK_NUMPAD9:
// Centered
glViewport((GLsizei)width / 4, (GLsizei)height / 4, (GLsizei)width / 2, (GLsizei)height / 2);
break;
}
break;
NOTE: If you are adding this code to the previous example you'll get a conflict in the WM_KEYDOWN case for toggling fullscreen, because VK_NUMPAD6 has the same value as the ASCII code of the letter f.
This example will demonstrate how to center the window on startup and how to add double buffering to your application.
Up to OGL 3.0 the common way to do projected rendering was the OpenGL Utility (GLU) library, setting the perspective projection with gluPerspective and the camera (eye space) with gluLookAt.
NOTE: This has been deprecated in the modern pipeline starting from OGL 3.0 and removed entirely in OGL 3.1.
Start by including the header file:
...
#include <GL/glu.h>
...
Now we can define the metrics of the window position at startup by adding these variables to your WinMain after RegisterClassEx:
...
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
...
NOTE: Try rewriting these variables using the CREATESTRUCT struct found in windows.h
Then replace the CW_USEDEFAULT calls in CreateWindow by:
...
x,
y,
sWindowWidth,
sWindowHeight,
...
Go to your initialize function and append the PFD_DOUBLEBUFFER flag to pfd.dwFlags:
...
pfd.dwFlags = ... | PFD_DOUBLEBUFFER
...
Now you don't need WM_PAINT in your WndProc; instead the draw calls will come from display(), once you add it to the main rendering loop's else-clause.
Remember to add an initial call to resize() at the end of initialize() so the projection is drawn to the screen.
Next we modify resize() to handle our new projection-based rendering, beginning by ensuring that the projection isn't divided by 0:
...
if (h == 0) {
h = 1;
}
...
Below the call to glViewport, switch to the projection matrix, reset it to its identity, and set the perspective using gluPerspective:
...
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
...
NOTE: Choosing a sensible zNear for gluPerspective() can significantly improve the depth calculations.
NOTE 2: When the perspective is set (e.g. 0.1f, 100.0f), your vertices' z distance from the camera (e.g. via glVertex3f) needs to be greater than the zNear value for gluPerspective to draw them to the screen!
When your application is double-buffered there's no need to flush in display(), since the back and front buffers will swap automatically and the next frame starts rendering (offscreen) once it's ready.
Replace glFlush() with the wingdi.h built-in function SwapBuffers, which takes the device context (g_hdc) as its single parameter:
SwapBuffers(g_hdc);
And now your window's startup position is in the middle of the screen (offset slightly by the size of the title bar).
The gluPerspective() transformation of the view:
- Local Coordinate System → World Coordinate System → View Coordinate System
- The frustum is applied in the Clip Coordinate System, which is turned into the Normalized Device Coordinate System and rendered on screen in the viewport, seen as the 2D window.
When you compile a program it goes through a preprocessor, compiler, assembler and linker stage; then your OS can load the application from your hard disk into random access memory (RAM) and execute it.
The OS will also load the VRAM on the GPU (sending framebuffers etc.). On a CRT screen, each pixel was displayed by phosphor being excited on the screen.
The framebuffer is stored in the VRAM on the GPU and holds the state of the color, depth, stencil and accumulation buffers.
Let's add our affine body transformations into your display() function:
// Positional transformation
glTranslatef(0.0f, 0.0f, -3.0f);
// Rotation transformation
glRotatef(translationValue, 1.0f, 1.0f, 0.0f); // When all axes are present it's an arbitrary rotation
// Scale transformation
glScalef(0.2f, 0.2f, 0.2f); // Scales the model in view to 0.2
// All of these go into the model transformation (Translation * Rotation * Scale)
This example video explains the process in details: https://www.youtube.com/watch?v=q5jOLztcvsM
Once your program is compiled successfully it's located on your hard drive, and when you run it it gets loaded into the RAM.
Once it's in RAM it needs to call a graphics device (GPU). Your OS calls the device driver (controlling the GPU), which allocates the GPU's VRAM.
The VRAM then maps the rendering content to the screen.
You pass the vertex data (as vertices) and your color data into the fixed function pipeline.
The fixed function pipeline doesn't give you any control over the pipeline; you only send your data into it. A programmable pipeline contains "holes" for VS, TS, GS and FS (vertex, tessellation, geometry and fragment shaders), which can be customized by the user.
Projection is used to set up the viewport and clipping boundary, while modelview is used to rotate, translate and scale objects quickly.
The vertices inside glBegin() and glEnd() are considered (interpreted as) an array (in local space). Using glTranslatef(), your displayed object is multiplied by the transformation matrix, given that glMatrixMode(GL_MODELVIEW) is set.
The order of inputs matters: translate, rotate then scale.
When you set GL_MODELVIEW you put the "cursor" at the center of the local space.
Position gets multiplied by the transform matrix, and we call this World Space Coordinates.
World Space Coordinates gets multiplied by the ViewMatrix and we call this the Eye Space Coordinates.
Golden rule in OpenGL: if you don't implement any camera, OGL places the camera at { 0, 0, 0 }.
Matrices are the easiest way to represent three dimensional transformations.
Your Eye Coordinates get multiplied by your perspective / orthographic projection, and we call this the Clip Coordinates.
Local Coordinates → World Coordinates → Eye Space Coordinates → Clip Coordinates
Here is a recap of how the Fixed Functional Pipeline works in each stage:
1. Vertex Specification Stage
a) Vertex Data → Vertices
→ Transformation → MODEL_TRANSFORMATION
There are three types of transformation in OpenGL
1) Position Transformation
2) Rotation Transformation
3) Scale Transformation
Everything in OpenGL is in the form of a matrix.
→ Local Coordinates get converted to World Space Coordinates
→ Vertices get multiplied by the transformation functions (glTranslatef, glRotatef, glScalef)
VIEW TRANSFORMATION
→ Camera Matrix
(MODEL - VIEW DUALITY) → Model * View Matrix
Now the ModelView matrix (Eye Space) gets multiplied by the projection matrix, and we call it Clip Space.
b) Primitive Assembly → What geometry gets rendered?
(GL_TRIANGLES)
c) Clipping
Viewport clipping
d) Perspective Divide → Here all the vertices are divided by the w component (we are converting from homogeneous to Cartesian)
NDC coordinates which are mapped to the screen
e) Viewport Transform → All your things get rendered into that viewport
All the things gets rendered into the viewport
f) Face Culling
You are rendering triangles (not showing the backface)
2. Pixel Specification Stage
a) Pixel Data → color, Textures, Light, Images
Per Pixel operations and unpacking
Texture Assembly → On which geometry should I render the image?
3. Rasterization → Creates the potential pixels
4. Per Fragment Tests
a) Pixel Ownership → Done by OpenGL automatically
b) Scissor Test → Done by OpenGL automatically
c) Alpha Test → Done by you (You have to enable it)
d) Depth Test → Done by you (You have to enable it)
e) Stencil Test → Done by you (You have to enable it)
f) Blending → Done by OpenGL and you both
g) Dithering → Done by OpenGL
h) Logic Operation → Done by OpenGL
1. Pre-processing of vertices (raw vertices)
2. Processing of vertices
1. Transformation
The vertices are in the local space
Here they transform from Local Space → World Space (Model Transform)
If you don't add any camera, OpenGL by default gives you the camera in the center
World Space Coordinates get multiplied by the view matrix; this is known as Eye Space
Eye Space is transformed to Clip Space.
3. Post processing
Primitive Assembly (GL_TRIANGLES)
Clipping → Viewport clipping
Perspective Divide → NDC Coordinates which are mapped to the screen
Viewport Transform → All the things get rendered into that viewport.
Face Culling → (You are rendering a triangle (not showing the back); this is called face culling)
[2, 7] → Cartesian Coordinate System. [2, 7, w] → Homogeneous Coordinate System. Conversion from homogeneous to Cartesian divides by w: [2, 7, 0] → 2/0, 7/0 → this represents a point at infinity; [2, 7, 1] → 2/1, 7/1 → the Cartesian point (2, 7).
Vertex (local space) → multiplied by the model transform (translate, rotate and scale - order matters!) → World Space → multiplied by the camera matrix → Eye Coordinates → multiplied by the projection matrix → Clip Space → Primitive Assembly → Clipping (different from culling: viewport clipping happens first and removes all vertices outside of the viewport) → Perspective Divide (dividing the clip space coordinates by the w component gives NDC, Normalized Device Coordinates) → then face culling (FACE_CULLING) and frustum culling happen. Rasterization converts the geometry to potential pixels (maps your 3D space into 2D), but we have almost no control over it.
Per-fragment tests are then performed (pixel ownership test, scissor test (what lies inside the screen view), alpha test, depth test, stencil test (shadows and more), blending, dithering and logic operations), and each pixel that passes those tests is rendered to the framebuffer.
From the NITH folder Innføring i grafikk programmering: Vertices are defined in object space. A coordinate system common to all objects is world space. Camera coordinates are called eye space.
#include <windows.h>
#include <GL/gl.h>
#include <gl/glu.h>
#include <stdbool.h>
#pragma comment(lib, "opengl32.lib")
#pragma comment(lib, "glu32.lib")
#pragma comment(linker, "/subsystem:windows" /*/entry:mainCRTStartup*/)
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-SDK");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_KEYDOWN:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
resize(800, 600);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -3.0f);
glRotatef(-75.0f, 1.0f, 0.0f, 0.0f);
glLineWidth(1.0f);
// Horizontal bars
float xLeft = -2.0f;
float xRight = 2.0f;
glBegin(GL_LINES);
glColor3f(0.48f, 1.0f, 0.48f);
for (float i = -2.0f; i < 2.1f; i = i + 0.2f) {
glVertex2f(xLeft, i);
glVertex2f(xRight, i);
}
glEnd();
// Vertical bars
float yBottom = -2.0f;
float yTop = 2.0f;
glBegin(GL_LINES);
glColor3f(0.48f, 1.0f, 0.48f);
for (float j = -2.0f; j < 2.1f; j = j + 0.2f) {
glVertex2f(j, yBottom);
glVertex2f(j, yTop);
}
glEnd();
glLineWidth(1.5f);
// Horizontally centered line
glBegin(GL_LINES);
glColor3f(0.0f, 1.0f, 0.0f);
glVertex2f(-2.0f, 0.0f);
glVertex2f(2.0f, 0.0f);
glEnd();
// Vertically centered line
glBegin(GL_LINES);
glColor3f(0.0f, 1.0f, 0.0f);
glVertex2f(0.0f, -2.0f);
glVertex2f(0.0f, 2.0f);
glEnd();
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
The Transformation Pipeline
Object Space: The starting point; each object's vertices are expressed relative to its own local origin.
World Space: The model transform moves the object's vertices into position relative to the world's origin.
Eye Space: Multiplying by the view transform brings the vertices into eye (camera) space.
Clip Space: The projection transform produces clip space, where geometry outside the visible volume is culled.
Normalized Device Coordinates: Dividing the clip-space coordinates by w (the perspective division) yields NDC.
Window Space: The viewport transform maps NDC to window coordinates, where rasterization draws the visible geometry on screen.
https://openglbook.com/chapter-4-entering-the-third-dimension.html
Direct State Access for OpenGL 2.1: https://registry.khronos.org/OpenGL/extensions/EXT/EXT_direct_state_access.txt
glLoadIdentity() resets the current matrix to the identity, re-centering the coordinate system, which allows objects to be moved independently of one another:

| 1.0 0.0 0.0 |
| 0.0 1.0 0.0 |
| 0.0 0.0 1.0 |
In OpenGL there are two kinds of matrix state to manage (one for the camera, one for the objects). glMatrixMode(GL_PROJECTION) selects the matrix used by the perspective or orthographic transformation, while glMatrixMode(GL_MODELVIEW) selects the matrices that transform your objects into view (camera) space.
In order to set up a camera you have to enable depth testing in OpenGL and configure glDepthFunc() to tell it how fragments should be compared.
glEnable(GL_DEPTH_TEST) enables depth testing so OpenGL can determine which surface is closest to the viewer.
glDepthFunc(GL_LEQUAL) tells OpenGL to accept a fragment when its depth value is less than or equal to the stored one, i.e. to draw the surface that is closer to the screen.
When using gluLookAt(), if the viewer's position moves past the focal point, the view appears to flip, e.g. if the viewer travels beyond the configured z coordinate the camera seems to turn around in the z direction.
Imagine a camera positioned as in the example below. If the user moves beyond EyeZ = -5.0f, the camera flips around and faces the opposite direction.
gluLookAt(
0.0f, 0.0f, 0.0f, // Camera position
0.0f, 0.0f, -5.0f, // Camera view position (focal point)
0.0f, 1.0f, 0.0f // Camera up-vector
);
gluLookAt() goes in the display() function
Here is an example of drawing a grid:
glLineWidth(1.0f);
// Horizontal bars
float x1 = -2.0f;
float y1 = 2.0f;
glBegin(GL_LINES);
glColor3f(0.48f, 1.0f, 0.48f);
for (float i = -2.0f; i < 2.1f; i = i + 0.2f) {
glVertex2f(x1, i);
glVertex2f(y1, i);
}
glEnd();
// Vertical bars
float x2 = 2.0f;
float y2 = 2.0f;
glBegin(GL_LINES);
glColor3f(0.48f, 1.0f, 0.48f);
for (float j = -2.0f; j < 2.1f; j = j + 0.2f) {
glVertex2f(j, x1);
glVertex2f(j, y2);
}
glEnd();
glLineWidth(1.5f);
// Horizontally centered line
glBegin(GL_LINES);
glColor3f(0.0f, 1.0f, 0.0f);
glVertex2f(-2.0f, 0.0f);
glVertex2f(2.0f, 0.0f);
glEnd();
// Vertically centered line
glBegin(GL_LINES);
glColor3f(0.0f, 1.0f, 0.0f);
glVertex2f(0.0f, -2.0f);
glVertex2f(0.0f, 2.0f);
glEnd();
TODO: Drawing a grid needs further explanation and detailing how to draw it as an object filled with color / texture.
x = r cos θ
y = r sin θ
These follow from the definitions of sine and cosine on a circle of radius r:
This calculates y:
sin θ = y / r  →  y = r sin θ
This calculates x:
cos θ = x / r  →  x = r cos θ
#define _USE_MATH_DEFINES 1
#include <math.h>
...
GLfloat r = 0.26f; // Radius
glBegin(GL_POINTS);
// cosf() and sinf() expect radians, so sweep theta from 0 to 2*pi
for (float theta = 0.0f; theta < 2.0f * (float)M_PI; theta = theta + 0.02f) {
glVertex2f(r * cosf(theta), r * sinf(theta));
}
glEnd();
This is also known as an inscribed circle. To calculate the incircle (so that it touches the triangle's sides), first find the length of each side as the distance between its two endpoints:
side = sqrt((x2-x1)^2 + (y2-y1)^2)
Incircle (in Hindi): https://www.youtube.com/watch?v=s4QeKQUgh0A
https://mathworld.wolfram.com/Incircle.html
https://www.quora.com/What-is-the-radius-of-the-incircle-of-a-triangle-with-sides-of-18-24-30-cm
Read more about polar coordinates here: https://mathinsight.org/polar_coordinates
Normal maps in Barycentric coordinates (terrain texture and destructions +++): https://www.youtube.com/watch?v=JX7xlFAJ0Ds
Improve terrain generation with CDLOD: https://www.youtube.com/watch?v=AT7h8pYJRiw
AI personalities: youtube.com/watch?v=q7E1N-fJnrA
Read Paul's Online Notes
This math example could be used in rigging a scene to generate destructable objects. http://fire-face.com/destruction/
To calculate the incircle we need to create some functions to do the calculation and declare some variables to use. Add the following function prototype and variables in the global scope:
#define _USE_MATH_DEFINES 1
#include <math.h>
...
void circle_radius(void);
void draw_circle(void);
...
float a = 4.0f; // Squared side lengths of the triangle drawn below,
float b = 5.0f; // which has vertices (0, 1), (-1, -1) and (1, -1)
float c = 5.0f;
float semiperimeter, perimeter, area, radius, xoffset, yoffset;
...
Then we write the functions to calculate the incircle radius, area and ...
void circle_radius(void)
{
// a, b and c hold the squared side lengths, so the actual lengths
// are their square roots
float side1 = sqrt(a);
float side2 = sqrt(b);
float side3 = sqrt(c);
perimeter = side1 + side2 + side3;
semiperimeter = perimeter / 2.0f;
// Heron's formula gives the triangle's area
area = sqrt(semiperimeter * (semiperimeter - side1) * (semiperimeter - side2) * (semiperimeter - side3));
radius = area / semiperimeter; // This gives the incircle radius
// The incenter is the average of the vertices, each weighted by the
// length of the side opposite it
xoffset = ((0.0f * side1) + (-1.0f * side2) + (1.0f * side3)) / perimeter;
yoffset = ((1.0f * side1) + (-1.0f * side2) + (-1.0f * side3)) / perimeter;
}
void draw_circle(void)
{
circle_radius();
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -3.0f);
glBegin(GL_LINE_LOOP);
for (float angle = 0.0f; angle < 2.0f * M_PI; angle = angle + 0.001f)
{
glVertex2f(radius * cos(angle) + xoffset, radius * sin(angle) + yoffset);
}
glEnd();
}
Then we can include the new functions in the display():
glBegin(GL_LINES);
// GL_LINES draws a separate segment between each pair of vertices
glVertex2f(0.0f, 1.0f);
glVertex2f(-1.0f, -1.0f);
glVertex2f(-1.0f, -1.0f);
glVertex2f(1.0f, -1.0f);
glVertex2f(1.0f, -1.0f);
glVertex2f(0.0f, 1.0f);
glEnd();
draw_circle();
https://www.calculatorsoup.com/calculators/geometry-plane/distance-two-points.php
http://www.gogeometry.com/problem/p193_area_of_a_triangle_semiperimeter_inradius.htm
Add the following to the display() function:
static GLfloat tri_movement = 2.0f;
static GLfloat tri_rising = -2.0f;
static float tri_rotate = 0.0f;
static bool tri_centered = false;
glTranslatef(tri_movement, tri_rising, -3.0f);
glRotatef(tri_rotate, 0.0f, 1.0f, 0.0f);
...
if (tri_movement > 0.0f) {
tri_movement -= 0.001f;
}
else {
tri_centered = true;
}
if (tri_rising < 0.0f) {
tri_rising += 0.001f;
}
if (tri_centered != true) {
tri_rotate += 0.25f;
}
else if (tri_rotate > 360.0f) {
tri_rotate = 0.0f;
}
...
glLoadIdentity();
static GLfloat line_movement = 2.0f;
glTranslatef(0.0f, line_movement, -3.0f);
glBegin(GL_LINES);
glVertex2f(0.0f, 1.0f);
glVertex2f(0.0f, -1.0f);
glEnd();
if (line_movement > 0.0f)
line_movement -= 0.001f;
Then edit the draw_circle() to include these lines of code:
static GLfloat cir_movement = -2.0f;
static GLfloat cir_rising = -2.0f;
static GLfloat rotate = 0.0f;
static bool centered = false;
...
glTranslatef(cir_movement, cir_rising, -3.0f);
glRotatef(rotate, 0.0f, 1.0f, 0.0f);
...
if (cir_movement < 0.0f) {
cir_movement += 0.001f;
}
else {
centered = true;
}
if (cir_rising < 0.0f)
cir_rising += 0.001f;
if (centered != true) {
rotate += 0.25f;
}
else if (rotate > 360.0f) {
rotate = 0.0f;
}
This will cause the three individual objects (triangle, line and circle) to move and rotate until they connect in the middle, then come to a full stop facing the camera.
Discussion regarding positioning objects: https://www.reddit.com/r/opengl/comments/pmrcjb/how_to_rotate_houses_properly/
It's time to start drawing 3D objects using GL_TRIANGLES.
Here is a working example that draws a multicolored pyramid and a cube and positions them to the left and the right of the screen.
// LESSON 21
glTranslatef(-2.0f, 0.0f, -10.0f);
static GLfloat rotation = 0.05f;
glRotatef(rotation, 1.0f, 1.0f, 0.0f);
glBegin(GL_TRIANGLES);
// Front
glColor3f(1.0f, 0.0f, 0.0f); // Red
glVertex3f(0.0f, 1.0f, 0.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glVertex3f(1.0f, -1.0f, 1.0f);
// Back
glColor3f(0.0f, 1.0f, 0.0f); // Green
glVertex3f(0.0f, 1.0f, 0.0f);
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(1.0f, -1.0f, -1.0f);
// Left
glColor3f(0.0f, 0.0f, 1.0f); // Blue
glVertex3f(0.0f, 1.0f, 0.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glVertex3f(-1.0f, -1.0f, -1.0f);
// Right
glColor3f(1.0f, 1.0f, 0.0f); // Yellow
glVertex3f(0.0f, 1.0f, 0.0f);
glVertex3f(1.0f, -1.0f, 1.0f);
glVertex3f(1.0f, -1.0f, -1.0f);
glEnd();
// Bottom of the triangle
glBegin(GL_QUADS);
// Bottom
glColor3f(0.0f, 1.0f, 1.0f); // Cyan
glVertex3f(-1.0f, -1.0f, 1.0f);
glVertex3f(1.0f, -1.0f, 1.0f);
glVertex3f(1.0f, -1.0f, -1.0f);
glVertex3f(-1.0f, -1.0f, -1.0f);
glEnd();
glLoadIdentity();
glTranslatef(2.0f, 0.0f, -10.0f);
static GLfloat quad_rot = 0.05f;
glRotatef(quad_rot, 1.0f, 1.0f, 0.0f);
glBegin(GL_QUADS);
// Front
glColor3f(1.0f, 0.0f, 0.0f); // Red
glVertex3f(-1.0f, -1.0f, 1.0f);
glVertex3f(1.0f, -1.0f, 1.0f);
glVertex3f(1.0f, 1.0f, 1.0f);
glVertex3f(-1.0f, 1.0f, 1.0f);
// Right
glColor3f(0.0f, 1.0f, 0.0f); // Green
glVertex3f(1.0f, -1.0f, -1.0f);
glVertex3f(1.0f, -1.0f, 1.0f);
glVertex3f(1.0f, 1.0f, 1.0f);
glVertex3f(1.0f, 1.0f, -1.0f);
// Back
glColor3f(0.0f, 0.0f, 1.0f); // Blue
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(1.0f, -1.0f, -1.0f);
glVertex3f(1.0f, 1.0f, -1.0f);
glVertex3f(-1.0f, 1.0f, -1.0f);
// Left
glColor3f(1.0f, 1.0f, 0.0f); // Yellow
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glVertex3f(-1.0f, 1.0f, 1.0f);
glVertex3f(-1.0f, 1.0f, -1.0f);
// Bottom
glColor3f(0.0f, 1.0f, 1.0f); // Cyan
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glVertex3f(1.0f, -1.0f, 1.0f);
glVertex3f(1.0f, -1.0f, -1.0f);
// Top
glColor3f(1.0f, 1.0f, 1.0f); // White
glVertex3f(-1.0f, 1.0f, 1.0f);
glVertex3f(1.0f, 1.0f, 1.0f);
glVertex3f(1.0f, 1.0f, -1.0f);
glVertex3f(-1.0f, 1.0f, -1.0f);
glEnd();
rotation += 0.05f;
quad_rot += 0.05f;
Let's start loading bitmap images to texture our scenery, using the built-in Win32 BITMAP set of functions. Start by making a function prototype for the texture-loading function and declaring a variable to store the texture name in:
bool load_texture(GLuint*, TCHAR[]);
...
GLuint texture;
Then we implement load_texture() alongside the rest of the custom functions in our project:
bool load_texture(GLuint* texture, TCHAR imageResourceId[])
{
HBITMAP bitmap = NULL;
BITMAP bmp;
bool bStatus = false;
bitmap = (HBITMAP)LoadImage(GetModuleHandle(NULL), imageResourceId, IMAGE_BITMAP, 0, 0, LR_CREATEDIBSECTION);
if (bitmap != NULL) {
GetObject(bitmap, sizeof(BITMAP), &bmp);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
// Generate texture
glGenTextures(1, texture);
glBindTexture(GL_TEXTURE_2D, *texture);
// Texture filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// Texture wrapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, bmp.bmWidth, bmp.bmHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, bmp.bmBits);
DeleteObject(bitmap);
bStatus = true;
}
return bStatus;
}
Make sure that you add #include "texture.h" in your main.c file, that you enable GL_TEXTURE_2D, and that you load the texture into memory; for simplicity, place the following code in your initialize() function:
glEnable(GL_TEXTURE_2D);
...
load_texture(&texture, MAKEINTRESOURCE(IDBITMAP_TEXTURE));
Then create a new file called texture.h (or something similar) under the Header Files section in Solution Explorer. Right-click the Header Files folder, select Add → New Item... and choose Header File (.h) in the file dialog. This is where we tell Windows what resources we intend to use, so add the following line to the newly created file:
#define IDBITMAP_TEXTURE 101
You might get error messages if the file doesn't end with a newline after the #define declaration; if you get an RC1004 - unexpected end of line error when compiling, try adding one or more blank lines at the end of texture.h.
Win32 uses a resource system to identify the resources your program uses (ICON, MENU / SUBMENU, BITMAP, etc.), so we also need to create a resource.rc (usually placed under Resource Files in Solution Explorer).
Right-click on the Resource Files folder and select Add → New File. In the file dialog, navigate to Resource under Visual C++ on the right-hand side. This time choose Resource File (.rc) and name it resource.rc.
To edit the content of the .rc file, right-click on it, select Open With... and choose C++ Source Code Editor:
#include "texture.h"
IDBITMAP_TEXTURE BITMAP Smiley.bmp
If the filename contains spaces you need to put it in quotes, e.g. "Smiley faces.bmp".
After that we draw a simple quad in the display() function and map the texture onto it using glTexCoord2f() with the correct texture coordinates (ranging from 0 to 1, where {0, 0} is the lower-left corner of the texture).
glBegin(GL_QUADS);
glTexCoord2f(1.0f, 1.0f);
glVertex2f(1.0f, 1.0f);
glTexCoord2f(0.0f, 1.0f);
glVertex2f(-1.0f, 1.0f);
glTexCoord2f(0.0f, 0.0f);
glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f);
glVertex2f(1.0f, -1.0f);
glEnd();
When you compile the project it will now display a quad with your texture on screen.
If you use an image found online (e.g. a JPG file, even one re-saved with a .bmp extension) you might have problems loading it, and your texture will not display at all.
If you don't see a texture on your quad, convert the image to a proper .BMP using your preferred image editor and try again!
The lower-left corner of the texture is { 0, 0 } in OpenGL. The X direction is the U coordinate, and the Y direction is the V coordinate.
If you want to load multiple textures, declare a new resource ID in texture.h per texture and make multiple calls to load_texture() in the initialize function. Then you can assign them to each primitive or model using glBindTexture(GL_TEXTURE_2D, name_of_texture).
Using resource.rc, all the data you reference is embedded into the executable file.
Windows sends WM_CREATE automatically when your window is created, so that case runs at program start. To play audio on Windows you need to include the header mmsystem.h and link the library winmm.lib using a pragma. You also need to add the files you intend to load to the header file (e.g. texture.h) and to resource.rc:
#define IDSOUND 102
IDSOUND WAVE audio.wav
#include <mmsystem.h>
...
#pragma comment(lib, "winmm.lib")
Once you have set up the program to load audio you can add it to your WndProc in a WM_CREATE case like this:
case WM_CREATE:
PlaySound(MAKEINTRESOURCE(IDSOUND), NULL, SND_NODEFAULT | SND_RESOURCE | SND_ASYNC);
break;
As with BMP files, the audio is embedded into the executable file (note: your executable's file size grows correspondingly).
More info on reading and writing WAV files in MSDN: https://docs.microsoft.com/en-us/windows/win32/medfound/tutorial--decoding-audio
To restore the texture's original colors (a texture is nothing but colors), set glColor3f(1.0f, 1.0f, 1.0f); white doesn't tint the texture.
UV coordinates always lie in the range { 0, 1 } and are never negative.
Following the clockwise direction, each side of the texture has length 1:

(0, 1) -------- (1, 1)
   |               |
   1               |
   |               |
(0, 0) --- 1 -- (1, 0)
A matrix is not a formula.
Programmers should know how matrices are handled internally. A matrix has two kinds of elements: rows and columns.
The order of matrices matters! Multiplying a translation matrix by a scale matrix gives a different result than multiplying a scale matrix by a translation matrix.
Matrices make representing 3D coordinates easy.
They can be extended to represent any number of dimensions.
They can represent infinity (imagine a railway track receding into the distance).
Your screen represents Cartesian coordinates.
OpenGL follows two stacks: the view (projection) stack and the model (transformation) stack.
The projection stack stores the perspective, orthographic or lookAt matrix.
The model stack stores the translation, rotation and scaling.
OpenGL implements its own stack for each.
Your GPU has its own dedicated texture memory.
Cartesian coordinates: [2, 3]
Homogeneous coordinates: [2, 3, 1] = (x, y & w)
Cartesian has no extra dimension; homogeneous carries an extra component, w, which is either 0 or 1.
x / w = 2 / 1 = 2
y / w = 3 / 1 = 3
x / 0 → infinity
y / 0 → infinity
m11 | 3.0  1.0  1.0 |
m21 | 1.0  1.0  1.0 |
m31 | 1.0  1.0 -0.5 |
(m11, m21, m31 label the rows; the entries in each row run across the columns)
glTranslatef(3.0f, 1.0f, -0.5f);
glRotatef(angle, 1.0f, 0.0f, 0.0f);
glScalef(5.0f, 5.0f, 5.0f);
Matrix elements are indexed by row first, then column: m11, m21, etc. read as mRC (Row, Column).
Transformations are applied in the order [T, R, S] (translate, rotate, scale).
A homogeneous coordinate like [5.0, -60.0, 0] is (x, y, w); when converting to Cartesian coordinates, dividing by w = 0 yields infinity.
Homogeneous coordinates allow affine transformations: translation, rotation and scaling.
OpenGL loads its two stacks internally, each starting from the unit (identity) matrix:

                   | 1 0 0 |
glLoadIdentity() → | 0 1 0 |
                   | 0 0 1 |
The Transformation Pipeline
To effect the types of transformations described in this chapter, you modify two matrices
in particular: the modelview matrix and the projection matrix. Don’t worry; OpenGL
provides some high-level functions that you can call for these transformations. After
you’ve mastered the basics of the OpenGL API, you will undoubtedly start trying some of
the more advanced 3D rendering techniques. Only then will you need to call the lower-level functions that actually set the values contained in the matrices.
The road from raw vertex data to screen coordinates is a long one. Figure 4.7 provides a
flowchart of this process. First, your vertex is converted to a 1×4 matrix in which the first
three values are the x, y, and z coordinates. The fourth number is a scaling factor that you
can apply manually by using the vertex functions that take four values. This is the w coordinate, usually 1.0 by default. You will seldom modify this value directly.
The Matrix: Mathematical Currency for 3D Graphics 135
FIGURE 4.7 The vertex transformation pipeline.
The vertex is then multiplied by the modelview matrix, which yields the transformed eye
coordinates. The eye coordinates are then multiplied by the projection matrix to yield clip
coordinates. OpenGL effectively eliminates all data outside this clipping space. The clip
coordinates are then divided by the w coordinate to yield normalized device coordinates.
The w value may have been modified by the projection matrix or the modelview matrix,
depending on the transformations that occurred. Again, OpenGL and the high-level
matrix functions hide this process from you.
Finally, your coordinate triplet is mapped to a 2D plane by the viewport transformation.
This is also represented by a matrix, but not one that you specify or modify directly.
OpenGL sets it up internally depending on the values you specified to glViewport.
The Modelview Matrix
The modelview matrix is a 4×4 matrix that represents the transformed coordinate system
you are using to place and orient your objects. The vertices you provide for your primitives are used as a single-column matrix and multiplied by the modelview matrix to yield
new transformed coordinates in relation to the eye coordinate system.
In Figure 4.8, a matrix containing data for a single vertex is multiplied by the modelview
matrix to yield new eye coordinates. The vertex data is actually four elements with an
extra value, w, that represents a scaling factor. This value is set by default to 1.0, and rarely
will you change it yourself.
136 CHAPTER 4 Geometric Transformations: The Pipeline
[The figure shows the vertex transformation pipeline:
original vertex data (x0, y0, z0, w0)
→ modelview matrix → transformed eye coordinates (xe, ye, ze, we)
→ projection matrix → clip coordinates (xc, yc, zc, wc)
→ perspective division → normalized device coordinates (xc/wc, yc/wc, zc/wc)
→ viewport transformation → window coordinates]
FIGURE 4.8 A matrix equation that applies the modelview transformation to a single vertex.
Translation
Let’s consider an example that modifies the modelview matrix. Say you want to draw a
cube using the GLUT library’s glutWireCube function. You simply call
glutWireCube(10.0f);
A cube that measures 10 units on a side is then centered at the origin. To move the cube
up the y-axis by 10 units before drawing it, you multiply the modelview matrix by a
matrix that describes a translation of 10 units up the y-axis and then do your drawing. In
skeleton form, the code looks like this:
// Construct a translation matrix for positive 10 Y
...
// Multiply it by the modelview matrix
...
// Draw the cube
glutWireCube(10.0f);
Actually, such a matrix is fairly easy to construct, but it requires quite a few lines of code.
Fortunately, OpenGL provides a high-level function that performs this task for you:
void glTranslatef(GLfloat x, GLfloat y, GLfloat z);
This function takes as parameters the amount to translate along the x, y, and z directions.
It then constructs an appropriate matrix and multiplies it onto the current matrix stack.
The pseudocode looks like the following, and the effect is illustrated in Figure 4.9:
// Translate up the y-axis 10 units
glTranslatef(0.0f, 10.0f, 0.0f);
// Draw the cube
glutWireCube(10.0f);
FIGURE 4.9 A cube translated 10 units in the positive y direction.
IS TRANSLATION ALWAYS A MATRIX OPERATION?
The studious reader may note that translations do not always require a full matrix multiplication,
but can be simplified with a simple scalar addition to the vertex position. However, for more
complex transformations that include combined simultaneous operations, it is correct to describe
translation as a matrix operation. Fortunately, if you let OpenGL do the heavy lifting for you, as
we have done here, the implementation can usually figure out the optimum method to use.
Rotation
To rotate an object about one of the three coordinate axes, or indeed any arbitrary vector,
you have to devise a rotation matrix. Again, a high-level function comes to the rescue:
glRotatef(GLfloat angle, GLfloat x, GLfloat y, GLfloat z);
Here, we perform a rotation around the vector specified by the x, y, and z arguments. The
angle of rotation is in the counterclockwise direction measured in degrees and specified by
the argument angle. In the simplest of cases, the rotation is around only one of the coordinate system's cardinal axes (X, Y, or Z).
You can also perform a rotation around an arbitrary axis by specifying x, y, and z values
for that vector. To see the axis of rotation, you can just draw a line from the origin to the
point represented by (x,y,z). The following code rotates the cube by 45° around an arbitrary axis specified by (1,1,1), as illustrated in Figure 4.10:
// Perform the transformation
glRotatef(45.0f, 1.0f, 1.0f, 1.0f);
// Draw the cube
glutWireCube(10.0f);
FIGURE 4.10 A cube rotated about an arbitrary axis.
Scaling
A scaling transformation changes the size of your object by expanding or contracting all
the vertices along the three axes by the factors specified. The function
glScalef(GLfloat x, GLfloat y, GLfloat z);
multiplies the x, y, and z values by the scaling factors specified.
Scaling does not have to be uniform, and you can use it to both stretch and squeeze
objects along different directions. For example, the following code produces a cube that is
twice as large along the x- and z-axes as the cubes discussed in the previous examples, but
still the same along the y-axis. The result is shown in Figure 4.11.
// Perform the scaling transformation
glScalef(2.0f, 1.0f, 2.0f);
// Draw the cube
glutWireCube(10.0f);
FIGURE 4.11 A nonuniform scaling of a cube.
The Identity Matrix
About now, you might be wondering why we had to bother with all this matrix stuff in
the first place. Can’t we just call these transformation functions to move our objects
around and be done with it? Do we really need to know that it is the modelview matrix
that is modified?
The answer is yes and no (but it’s no only if you are drawing a single object in your
scene). The reason is that the effects of these functions are cumulative. Each time you call
one, the appropriate matrix is constructed and multiplied by the current modelview
matrix. The new matrix then becomes the current modelview matrix, which is then multiplied by the next transformation, and so on.
Suppose you want to draw two spheres—one 10 units up the positive y-axis and one 10
units out the positive x-axis, as shown in Figure 4.12. You might be tempted to write code
that looks something like this:
// Go 10 units up the y-axis
glTranslatef(0.0f, 10.0f, 0.0f);
// Draw the first sphere
glutSolidSphere(1.0f,15,15);
// Go 10 units out the x-axis
glTranslatef(10.0f, 0.0f, 0.0f);
// Draw the second sphere
glutSolidSphere(1.0f, 15, 15);
FIGURE 4.12 Two spheres drawn on the y- and x-axes.
Consider, however, that each call to glTranslate is cumulative on the modelview matrix,
so the second call translates 10 units in the positive x direction from the previous translation in the y direction. This yields the results shown in Figure 4.13.
FIGURE 4.13 The result of two consecutive translations.
You can make an extra call to glTranslate to back down the y-axis 10 units in the negative direction, but this makes some complex scenes difficult to code and debug—not to
mention that you throw extra transformation math at the CPU or GPU. A simpler method
is to reset the modelview matrix to a known state—in this case, centered at the origin of
the eye coordinate system.
You reset the origin by loading the modelview matrix with the identity matrix. The identity
matrix specifies that no transformation is to occur, in effect saying that all the coordinates
you specify when drawing are in eye coordinates. An identity matrix contains all 0s, with
the exception of a diagonal row of 1s. When this matrix is multiplied by any vertex
matrix, the result is that the vertex matrix is unchanged. Figure 4.14 shows this equation.
Later in the chapter, we discuss in more detail why these numbers are where they are.
| 1.0  0    0    0   |   |  8.0 |   |  8.0 |
| 0    1.0  0    0   | x |  4.5 | = |  4.5 |
| 0    0    1.0  0   |   | -2.0 |   | -2.0 |
| 0    0    0    1.0 |   |  1.0 |   |  1.0 |
FIGURE 4.14 Multiplying a vertex by the identity matrix yields the same vertex matrix.
As we’ve already stated, the details of performing matrix multiplication are outside the
scope of this book. For now, just remember this: Loading the identity matrix means that
no transformations are performed on the vertices. In essence, you are resetting the
modelview matrix to the origin.
The following two lines load the identity matrix into the modelview matrix:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
The first line specifies that the current operating matrix is the modelview matrix. After
you set the current operating matrix (the matrix that your matrix functions are affecting),
it remains the active matrix until you change it. The second line loads the current matrix
(in this case, the modelview matrix) with the identity matrix.
Now, the following code produces the results shown earlier in Figure 4.12:
// Set current matrix to modelview and reset
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// Go 10 units up the y-axis
glTranslatef(0.0f, 10.0f, 0.0f);
// Draw the first sphere
glutSolidSphere(1.0f, 15, 15);
// Reset modelview matrix again
glLoadIdentity();
// Go 10 units out the x-axis
glTranslatef(10.0f, 0.0f, 0.0f);
// Draw the second sphere
glutSolidSphere(1.0f, 15, 15);
The Matrix Stacks
Resetting the modelview matrix to identity before placing every object is not always desirable. Often, you want to save the current transformation state and then restore it after
some objects have been placed. This approach is most convenient when you have initially
transformed the modelview matrix as your viewing transformation (and thus are no
longer located at the origin).
To facilitate this procedure, OpenGL maintains a matrix stack for both the modelview and
projection matrices. A matrix stack works just like an ordinary program stack. You can
push the current matrix onto the stack with glPushMatrix to save it and then make your
changes to the current matrix. Popping the matrix off the stack with glPopMatrix then
restores it. Figure 4.15 shows the stack principle in action.
FIGURE 4.15 The matrix stack in action.
TEXTURE MATRIX STACK
The texture stack is another matrix stack available to you. You use it to transform texture coordinates. Chapter 8, “Texture Mapping: The Basics,” examines texture mapping and texture coordinates and contains a discussion of the texture matrix stack.
The stack depth can reach a maximum value that you can retrieve with a call to either
glGetIntegerv(GL_MAX_MODELVIEW_STACK_DEPTH, &depth);
or
glGetIntegerv(GL_MAX_PROJECTION_STACK_DEPTH, &depth);
If you exceed the stack depth, you get a GL_STACK_OVERFLOW error; if you try to pop a
matrix value off the stack when there is none, you generate a GL_STACK_UNDERFLOW error.
The stack depth is implementation dependent. For the Microsoft software implementation, the values are 32 for the modelview and 2 for the projection stack.
A Nuclear Example
Let’s put to use what we have learned. In the next example, we build a crude, animated
model of an atom. This atom has a single sphere at the center to represent the nucleus
and three electrons in orbit about the atom. We use an orthographic projection, as we
have in all the examples so far in this book.
Our ATOM program uses the GLUT timer callback mechanism (discussed in Chapter 2,
“Using OpenGL”) to redraw the scene about 10 times per second. Each time the
RenderScene function is called, the angle of revolution about the nucleus is incremented.
Also, each electron lies in a different plane. Listing 4.1 shows the RenderScene function
for this example, and the output from the ATOM program is shown in Figure 4.16.
LISTING 4.1 RenderScene Function from ATOM Sample Program
// Called to draw scene
void RenderScene(void)
{
// Angle of revolution around the nucleus
static GLfloat fElect1 = 0.0f;
// Clear the window with current clearing color
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Reset the modelview matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// Translate the whole scene out and into view
// This is the initial viewing transformation
glTranslatef(0.0f, 0.0f, -100.0f);
// Red Nucleus
glColor3ub(255, 0, 0);
glutSolidSphere(10.0f, 15, 15);
// Yellow Electrons
glColor3ub(255,255,0);
// First Electron Orbit
// Save viewing transformation
glPushMatrix();
// Rotate by angle of revolution
glRotatef(fElect1, 0.0f, 1.0f, 0.0f);
// Translate out from origin to orbit distance
glTranslatef(90.0f, 0.0f, 0.0f);
// Draw the electron
glutSolidSphere(6.0f, 15, 15);
// Restore the viewing transformation
glPopMatrix();
// Second Electron Orbit
glPushMatrix();
glRotatef(45.0f, 0.0f, 0.0f, 1.0f);
glRotatef(fElect1, 0.0f, 1.0f, 0.0f);
glTranslatef(-70.0f, 0.0f, 0.0f);
glutSolidSphere(6.0f, 15, 15);
glPopMatrix();
// Third Electron Orbit
glPushMatrix();
glRotatef(360.0f - 45.0f, 0.0f, 0.0f, 1.0f);
glRotatef(fElect1, 0.0f, 1.0f, 0.0f);
glTranslatef(0.0f, 0.0f, 60.0f);
glutSolidSphere(6.0f, 15, 15);
glPopMatrix();
// Increment the angle of revolution
fElect1 += 10.0f;
if(fElect1 > 360.0f)
fElect1 = 0.0f;
// Show the image
glutSwapBuffers();
}
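The increment-and-reset at the end of RenderScene can be captured in a tiny helper. This sketch (our own, not part of the sample) wraps instead of snapping back to zero, which keeps the animation speed even when the angle overshoots 360:

```c
#include <assert.h>

// Advance an angle of revolution and wrap it into [0, 360).
// Unlike the listing's reset-to-zero, this preserves the overshoot,
// so the rotation rate stays uniform across the wrap.
float advanceAngle(float angle, float step)
{
    angle += step;
    if (angle >= 360.0f)
        angle -= 360.0f;
    return angle;
}
```

For example, advancing 355 degrees by a 10-degree step wraps to 5 degrees rather than jumping to 0.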
FIGURE 4.16 Output from the ATOM sample program.
Let’s examine the code for placing one of the electrons, a couple of lines at a time. The first
line saves the current modelview matrix by pushing the current transformation on the stack:
// First Electron Orbit
// Save viewing transformation
glPushMatrix();
Now the coordinate system appears to be rotated around the y-axis by an angle, fElect1:
// Rotate by angle of revolution
glRotatef(fElect1, 0.0f, 1.0f, 0.0f);
The electron is drawn by translating down the newly rotated coordinate system:
// Translate out from origin to orbit distance
glTranslatef(90.0f, 0.0f, 0.0f);
Then the electron is drawn (as a solid sphere), and we restore the modelview matrix by
popping it off the matrix stack:
// Draw the electron
glutSolidSphere(6.0f, 15, 15);
// Restore the viewing transformation
glPopMatrix();
The other electrons are placed similarly.
Using Projections
In our examples so far, we have used the modelview matrix to position our vantage point
of the viewing volume and to place our objects therein. The projection matrix actually
specifies the size and shape of our viewing volume.
Thus far in this book, we have created a simple parallel viewing volume using the function
glOrtho, setting the near and far, left and right, and top and bottom clipping coordinates.
In OpenGL, when the projection matrix is loaded with the identity matrix, the diagonal
line of 1s specifies that the clipping planes extend from the origin to +1 or –1 in all directions. The projection matrix by itself does no scaling or perspective adjustments unless
you load a perspective projection matrix.
The next two sample programs, ORTHO and PERSPECT, are not covered in detail from the
standpoint of their source code. These examples use lighting and shading that we haven’t
covered yet to help highlight the differences between an orthographic and a perspective
projection. These interactive samples make it much easier for you to see firsthand how the
projection can distort the appearance of an object. If possible, you should run these examples while reading the next two sections.
146 CHAPTER 4 Geometric Transformations: The Pipeline
Orthographic Projections
The orthographic projection that we have used for most of this book so far is square on all
sides. The logical width is equal at the front, back, top, bottom, left, and right sides. This
produces a parallel projection, which is useful for drawings of specific objects that do not
have any foreshortening when viewed from a distance. This is good for 2D graphics such
as text, or architectural drawings for which you want to represent the exact dimensions
and measurements onscreen.
Figure 4.17 shows the output from the sample program ORTHO in this chapter’s subdirectory in the source distribution. To produce this hollow, tubelike box, we used an orthographic projection just as we did for all our previous examples. Figure 4.18 shows the same
box rotated more to the side so you can see how long it actually is.
FIGURE 4.17 A hollow square tube shown with an orthographic projection.
FIGURE 4.18 A side view showing the length of the square tube.
In Figure 4.19, you’re looking directly down the barrel of the tube. Because the tube does
not converge in the distance, this is not an entirely accurate view of how such a tube
appears in real life. To add some perspective, we must use a perspective projection.
FIGURE 4.19 Looking down the barrel of the tube.
Perspective Projections
A perspective projection performs perspective division to shorten and shrink objects that
are farther away from the viewer. The width of the back of the viewing volume does not
have the same measurements as the front of the viewing volume after being projected to
the screen. Thus, an object of the same logical dimensions appears larger at the front of
the viewing volume than if it were drawn at the back of the viewing volume.
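To make the foreshortening concrete, here is a minimal sketch (our own helper, not from the sample programs) of the perspective division step itself: a clip-space point is divided through by its w component, so a larger w — a point deeper in the viewing volume — projects to smaller x and y.

```c
#include <assert.h>

// A clip-space point (x, y, z, w) and its normalized device coordinates
typedef struct { float x, y, z, w; } Vec4;
typedef struct { float x, y, z; } Vec3;

// Perspective division: divide through by w. Points with a larger w
// (farther down the frustum) land closer to the center of the screen.
Vec3 perspectiveDivide(Vec4 clip)
{
    Vec3 ndc = { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w };
    return ndc;
}
```

With w = 2 the point (4, 2, -8) projects to (2, 1, -4); doubling w halves the projected x and y — exactly the shrinking with distance described above.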
The picture in our next example is of a geometric shape called a frustum. A frustum is a
truncated section of a pyramid viewed from the narrow end to the broad end. Figure 4.20
shows the frustum, with the observer in place.
FIGURE 4.20 A perspective projection defined by a frustum.
You can define a frustum with the function glFrustum. Its parameters are the coordinates
and distances between the front and back clipping planes. However, glFrustum is not as
intuitive about setting up your projection to get the desired effects, and is typically used
for more specialized purposes (for example, stereo, tiles, asymmetric view volumes). The
utility function gluPerspective is easier to use and somewhat more intuitive for most
purposes:
void gluPerspective(GLdouble fovy, GLdouble aspect,
GLdouble zNear, GLdouble zFar);
Parameters for the gluPerspective function are a field-of-view angle in the vertical direction, the aspect ratio of the width to height, and the distances to the near and far clipping
planes (see Figure 4.21). You find the aspect ratio by dividing the width (w) by the height
(h) of the window or viewport.
FIGURE 4.21 The frustum as defined by gluPerspective.
Listing 4.2 shows how we change our orthographic projection from the previous examples
to use a perspective projection. Foreshortening adds realism to our earlier orthographic
projections of the square tube (see Figures 4.22, 4.23, and 4.24). The only substantial
change we made from our typical projection code in Listing 4.2 was replacing the call to
gluOrtho2D with a call to gluPerspective.
FIGURE 4.22 The square tube with a perspective projection.
FIGURE 4.23 A side view with foreshortening.
FIGURE 4.24 Looking down the barrel of the tube with perspective added.
LISTING 4.2 Setting Up the Perspective Projection for the PERSPECT Sample Program
// Change viewing volume and viewport. Called when window is resized
void ChangeSize(GLsizei w, GLsizei h)
{
GLfloat fAspect;
// Prevent a divide by zero
if(h == 0)
h = 1;
// Set viewport to window dimensions
glViewport(0, 0, w, h);
fAspect = (GLfloat)w/(GLfloat)h;
// Reset coordinate system
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// Produce the perspective projection
gluPerspective(60.0f, fAspect, 1.0, 400.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
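The guard against a zero height matters because a freshly created or minimized window can report h == 0. Extracted as a helper (ours, for illustration), the logic is:

```c
#include <assert.h>

// Aspect ratio of the viewport with the divide-by-zero guard from
// ChangeSize: a zero height is treated as 1.
float viewportAspect(int w, int h)
{
    if (h == 0)
        h = 1;
    return (float)w / (float)h;
}
```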
We made the same changes to the ATOM example in ATOM2 to add perspective. Run the
two side by side, and you see how the electrons appear to be smaller as they swing far
away behind the nucleus.
A Far-Out Example
For a more complete example showing modelview manipulation and perspective projections, we have modeled the sun and the earth/moon system in revolution in the SOLAR
sample program. This is a classic example of nested transformations with objects being
transformed relative to one another using the matrix stack. We have enabled some lighting and shading for drama so that you can more easily see the effects of our operations.
You’ll learn about shading and lighting in the next two chapters.
In our model, the earth moves around the sun, and the moon revolves around the earth.
A light source is placed at the center of the sun, which is drawn without lighting to make
it appear to be the glowing light source. This powerful example shows how easily you can
produce sophisticated effects with OpenGL.
Listing 4.3 shows the code that sets up the projection and the rendering code that keeps
the system in motion. A timer elsewhere in the program triggers a window redraw 10
times a second to keep the RenderScene function in action. Notice in Figures 4.25 and
4.26 that when the earth appears larger, it’s on the near side of the sun; on the far side, it
appears smaller.
LISTING 4.3 Code That Produces the Sun/Earth/Moon System
// Change viewing volume and viewport. Called when window is resized
void ChangeSize(GLsizei w, GLsizei h)
{
GLfloat fAspect;
// Prevent a divide by zero
if(h == 0)
h = 1;
// Set viewport to window dimensions
glViewport(0, 0, w, h);
// Calculate aspect ratio of the window
fAspect = (GLfloat)w/(GLfloat)h;
// Set the perspective coordinate system
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// Field of view of 45 degrees, near and far planes 1.0 and 425
gluPerspective(45.0f, fAspect, 1.0, 425.0);
// Modelview matrix reset
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
// Called to draw scene
void RenderScene(void)
{
// Earth and moon angle of revolution
static float fMoonRot = 0.0f;
static float fEarthRot = 0.0f;
// Clear the window with current clearing color
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Save the matrix state and do the rotations
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
// Translate the whole scene out and into view
glTranslatef(0.0f, 0.0f, -300.0f);
// Set material color, to yellow
// Sun
glColor3ub(255, 255, 0);
glDisable(GL_LIGHTING);
glutSolidSphere(15.0f, 15, 15);
glEnable(GL_LIGHTING);
// Position the light after we draw the Sun!
glLightfv(GL_LIGHT0,GL_POSITION,lightPos);
// Rotate coordinate system
glRotatef(fEarthRot, 0.0f, 1.0f, 0.0f);
// Draw the earth
glColor3ub(0,0,255);
glTranslatef(105.0f,0.0f,0.0f);
glutSolidSphere(15.0f, 15, 15);
// Rotate from Earth-based coordinates and draw moon
glColor3ub(200,200,200);
glRotatef(fMoonRot,0.0f, 1.0f, 0.0f);
glTranslatef(30.0f, 0.0f, 0.0f);
fMoonRot+= 15.0f;
if(fMoonRot > 360.0f)
fMoonRot = 0.0f;
glutSolidSphere(6.0f, 15, 15);
// Restore the matrix state
glPopMatrix(); // Modelview matrix
// Step Earth orbit 5 degrees
fEarthRot += 5.0f;
if(fEarthRot > 360.0f)
fEarthRot = 0.0f;
// Show the image
glutSwapBuffers();
}
FIGURE 4.25 The sun/earth/moon system with the earth on the near side.
FIGURE 4.26 The sun/earth/moon system with the earth on the far side.
Advanced Matrix Manipulation
These higher-level “canned” transformations (for rotation, scaling, and translation) are
great for many simple transformation problems. Real power and flexibility, however, are
afforded to those who take the time to understand using matrices directly. Doing so is not
as hard as it sounds, but first you need to understand the magic behind those 16 numbers
that make up a 4×4 transformation matrix.
OpenGL represents a 4×4 matrix not as a two-dimensional array of floating-point values,
but as a single array of 16 floating-point values. This approach is different from many
math libraries, which do take the two-dimensional array approach. For example, OpenGL
prefers the first of these two examples:
GLfloat matrix[16]; // Nice OpenGL friendly matrix
GLfloat matrix[4][4]; // Popular, but not as efficient for OpenGL
OpenGL can use the second variation, but the first is a more efficient representation. The
reason for this will become clear in a moment. These 16 elements represent the 4×4
matrix, as shown in Figure 4.27. When the array elements traverse down the matrix
columns one by one, we call this column-major matrix ordering. In memory, the 4×4
approach of the two-dimensional array (the second option in the preceding code) is laid
out in a row-major order. In math terms, the two orientations are the transpose of one
another.
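The index arithmetic behind the two orderings is worth spelling out. A small sketch (our own helpers):

```c
#include <assert.h>

// In a column-major 16-element array, element (row, col) of the
// 4x4 matrix lives at index col*4 + row.
float columnMajorGet(const float m[16], int row, int col)
{
    return m[col * 4 + row];
}

// Transposing a 4x4 converts between column-major and row-major
// storage -- the "transpose of one another" relationship above.
void transpose44(const float in[16], float out[16])
{
    for (int row = 0; row < 4; row++)
        for (int col = 0; col < 4; col++)
            out[row * 4 + col] = in[col * 4 + row];
}
```

With the array filled with the values 0 through 15, element (1, 0) is 1 (second entry of the first column) and element (0, 1) is 4 (first entry of the second column).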
a0  a4  a8   a12
a1  a5  a9   a13
a2  a6  a10  a14
a3  a7  a11  a15
FIGURE 4.27 Column-major matrix ordering.
The real magic lies in the fact that these 16 values represent a particular position in space
and an orientation of the three axes with respect to the eye coordinate system (remember
that fixed, unchanging coordinate system we talked about earlier). Interpreting these
numbers is not hard at all. The four columns each represent a four-element vector. To keep
things simple for this book, we focus our attention on just the first three elements of these
vectors. The fourth column vector contains the x, y, and z values of the transformed coordinate system’s origin. When you call glTranslate on the identity matrix, all it does is put
your values for x, y, and z in the 12th, 13th, and 14th position of the matrix.
The first three elements of the first three columns are just directional vectors that represent the orientation (vectors here are used to represent a direction) of the x-, y-, and z-axes
in space. For most purposes, these three vectors are always at 90° angles from each other,
and are usually each of unit length (unless you are also applying a scale or shear). The
mathematical term for this (in case you want to impress your friends) is orthonormal when
the vectors are unit length, and orthogonal when they are not. Figure 4.28 shows the 4×4
transformation matrix with the column vectors highlighted. Notice that the last row of
the matrix is all 0s with the exception of the very last element, which is 1.
Xx  Yx  Zx  Tx
Xy  Yy  Zy  Ty
Xz  Yz  Zz  Tz
0   0   0   1
(columns, left to right: X-axis direction, Y-axis direction, Z-axis direction, translation/location)
FIGURE 4.28 How a 4×4 matrix represents a position and orientation in 3D space.
The most amazing thing is that if you have a 4×4 matrix that contains the position and
orientation of a different coordinate system, and you multiply a vertex (as a column
matrix or vector) by this matrix, the result is a new vertex that has been transformed to
the new coordinate system. This means that any position in space and any desired orientation can be uniquely defined by a 4×4 matrix, and if you multiply all of an object’s
vertices by this matrix, you transform the entire object to the given location and orientation in space!
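The multiplication just described is easy to write out. Here is a minimal sketch (our own helper, following the column-major layout from Figure 4.28) that transforms a point, assuming w = 1:

```c
#include <assert.h>

typedef struct { float x, y, z; } Point3;

// Transform a point (x, y, z, 1) by a column-major 4x4 matrix:
// each axis column is scaled by the matching component, and the
// translation column (elements 12-14) is added because w = 1.
Point3 transformPoint(const float m[16], Point3 v)
{
    Point3 out;
    out.x = m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12];
    out.y = m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13];
    out.z = m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14];
    return out;
}
```

With the identity matrix plus a translation of (10, 20, 30) injected at elements 12-14, the point (1, 2, 3) lands at (11, 22, 33).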
HARDWARE TRANSFORMATIONS
Most OpenGL implementations have what is called hardware transform and lighting. This means
that the transformation matrix multiplies many thousands of vertices on special graphics hardware that performs this operation very, very fast. (Intel and AMD can eat their hearts out!)
However, functions such as glRotate and glScale, which create transformation matrices for you,
are usually not hardware accelerated because typically they represent an exceedingly small fraction of the enormous amount of matrix math that must be done to draw a scene.
Loading a Matrix
After you have a handle on the way the 4×4 matrix represents a given location and orientation, you may want to compose and load your own transformation matrices. You can
load an arbitrary column-major matrix into the projection, modelview, or texture matrix
stacks by using the following function:
void glLoadMatrixf(const GLfloat *m);
or
void glLoadMatrixd(const GLdouble *m);
Most OpenGL implementations store and manipulate pipeline data as floats and not
doubles; consequently, using the second variation may incur some performance penalty
because 16 double-precision numbers must be converted into single-precision floats.
The following code shows an array being loaded with the identity matrix and then being
loaded into the modelview matrix stack. This example is equivalent to calling
glLoadIdentity using the higher-level functions:
// Load an identity matrix
GLfloat m[] = { 1.0f, 0.0f, 0.0f, 0.0f, // X Column
0.0f, 1.0f, 0.0f, 0.0f, // Y Column
0.0f, 0.0f, 1.0f, 0.0f, // Z Column
0.0f, 0.0f, 0.0f, 1.0f }; // Translation
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(m);
Although OpenGL implementations use column-major ordering, OpenGL (versions 1.3
and later) does provide functions to load a matrix in row-major ordering. The following
two functions perform the transpose operation on the matrix when loading it on the
matrix stack:
void glLoadTransposeMatrixf(const GLfloat *m);
and
void glLoadTransposeMatrixd(const GLdouble *m);
Performing Your Own Transformations
Let’s look at an example now that shows how to create and load your own transformation
matrix—the hard way! In the sample program TRANSFORM, we draw a torus (a doughnut-shaped object) in front of our viewing location and make it rotate in place. The function
DrawTorus does the necessary math to generate the torus’s geometry and takes as an argument a 4×4 transformation matrix to be applied to the vertices. We create the matrix and
apply the transformation manually to each vertex to transform the torus. Let’s start with
the main rendering function in Listing 4.4.
LISTING 4.4 Code to Set Up the Transformation Matrix While Drawing
void RenderScene(void)
{
M3DMatrix44f transformationMatrix; // Storage for rotation matrix
static GLfloat yRot = 0.0f; // Rotation angle for animation
yRot += 0.5f;
// Clear the window with current clearing color
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Build a rotation matrix
m3dRotationMatrix44(transformationMatrix, m3dDegToRad(yRot),
0.0f, 1.0f, 0.0f);
transformationMatrix[12] = 0.0f;
transformationMatrix[13] = 0.0f;
transformationMatrix[14] = -2.5f;
DrawTorus(transformationMatrix);
// Do the buffer Swap
glutSwapBuffers();
}
We begin by declaring storage for the matrix here:
M3DMatrix44f transformationMatrix; // Storage for rotation matrix
The data type M3DMatrix44f is of our own design and is simply a typedef declared in
math3d.h for a floating-point array 16 elements long:
typedef GLfloat M3DMatrix44f[16]; // A column major 4x4 matrix of type GLfloat
The animation in this sample works by continually incrementing the variable yRot that
represents the rotation around the y-axis. After clearing the color and depth buffer, we
compose our transformation matrix as follows:
m3dRotationMatrix44(transformationMatrix, m3dDegToRad(yRot), 0.0f, 1.0f, 0.0f);
transformationMatrix[12] = 0.0f;
transformationMatrix[13] = 0.0f;
transformationMatrix[14] = -2.5f;
Here, the first line contains a call to another math3d function, m3dRotationMatrix44. This
function takes a rotation angle in radians (for more efficient calculations) and three arguments specifying a vector around which you want the rotation to occur. The macro function m3dDegToRad does an in-place conversion from degrees to radians. With the exception
of the angle being in radians instead of degrees, this is almost exactly like the OpenGL
function glRotate. The first argument is a matrix into which you want to store the resulting rotation matrix.
As you saw in Figure 4.28, the last column of the matrix represents the translation of the
transformation. Rather than do a full matrix multiplication, we can simply inject the
desired translation directly into the matrix. Now the resulting matrix represents both a
translation in space (a location to place the torus) and then a rotation of the object’s coordinate system applied at that location.
Next, we pass this transformation matrix to the DrawTorus function. We do not need to
list the entire function to create a torus here, but focus your attention on these lines:
objectVertex[0] = x0*r;
objectVertex[1] = y0*r;
objectVertex[2] = z;
m3dTransformVector3(transformedVertex, objectVertex, mTransform);
glVertex3fv(transformedVertex);
The three components of the vertex are loaded into an array and passed to the function
m3dTransformVector3. This math3d function performs the multiplication of the vertex
against the matrix and returns the transformed vertex in the array transformedVertex. We
then use the vector version of glVertex and send the vertex data down to OpenGL. The
result is a spinning torus, as shown in Figure 4.29.
FIGURE 4.29 The spinning torus, doing our own transformations.
It is important that you see at least once the real mechanics of how vertices are transformed by a matrix using such a drawn-out example. As you progress as an OpenGL
programmer, you will find that the need to transform points manually will arise for tasks
that are not specifically related to rendering operations, such as collision detection
(bumping into objects), frustum culling (throwing away and not drawing things you can’t
see), and some other special effects algorithms.
For geometry processing, however, the TRANSFORM sample program is very inefficient,
despite its instructional value. We are letting the CPU do all the matrix math instead of
letting OpenGL’s dedicated hardware do the work for us (which is much faster than the
CPU!). In addition, because OpenGL has the modelview matrix, all our transformed points
are being multiplied yet again by the identity matrix. This does not change the value of
our transformed vertices, but it is still a wasted operation.
For the sake of completeness, we provide an improved example, TRANSFORMGL, that
instead uses our transformation matrix but hands it over to OpenGL using the function
glLoadMatrixf. We eliminate our DrawTorus function with its dedicated transformation
code and use a more general-purpose torus drawing function, gltDrawTorus, from the
glTools library. The relevant code is shown in Listing 4.5.
LISTING 4.5 Loading the Transformation Matrix Directly into OpenGL
// Build a rotation matrix
m3dRotationMatrix44(transformationMatrix, m3dDegToRad(yRot),
0.0f, 1.0f, 0.0f);
transformationMatrix[12] = 0.0f;
transformationMatrix[13] = 0.0f;
transformationMatrix[14] = -2.5f;
glLoadMatrixf(transformationMatrix);
gltDrawTorus(0.35, 0.15, 40, 20);
Adding Transformations Together
In the preceding example, we simply constructed a single transformation matrix and
loaded it into the modelview matrix. This technique had the effect of transforming any
and all geometry that followed by that matrix before being rendered. As you’ve seen in
the previous examples, we often add one transformation to another. For example, we used
glTranslate followed by glRotate to first translate and then rotate an object before being
drawn. Behind the scenes, when you call multiple transformation functions, OpenGL
performs a matrix multiplication between the existing transformation matrix and the one
you are adding or appending to it. For example, in the TRANSFORMGL example, we
might replace the code in Listing 4.5 with something like the following:
glPushMatrix();
glTranslatef(0.0f, 0.0f, -2.5f);
glRotatef(yRot, 0.0f, 1.0f, 0.0f);
gltDrawTorus(0.35, 0.15, 40, 20);
glPopMatrix();
Using this approach has the effect of saving the current identity matrix, multiplying the
translation matrix, multiplying the rotation matrix, and then transforming the torus by
the result. You can do these multiplications yourself by using the math3d function
m3dMatrixMultiply, as shown here:
M3DMatrix44f rotationMatrix, translationMatrix, transformationMatrix;
...
m3dRotationMatrix44(rotationMatrix, m3dDegToRad(yRot), 0.0f, 1.0f, 0.0f);
m3dTranslationMatrix44(translationMatrix, 0.0f, 0.0f, -2.5f);
m3dMatrixMultiply44(transformationMatrix, translationMatrix, rotationMatrix);
glLoadMatrixf(transformationMatrix);
gltDrawTorus(0.35f, 0.15f, 40, 20);
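m3dMatrixMultiply44 is part of the book's math3d library; as a sketch of what such a routine does, here is a straightforward column-major 4×4 multiply of our own, along with two small helpers for checking it:

```c
#include <assert.h>

typedef struct { float v[16]; } M44;

// r = a * b for column-major 4x4 matrices: element (row, col) of the
// result is the dot product of row `row` of a with column `col` of b.
M44 multiply44(M44 a, M44 b)
{
    M44 r;
    for (int col = 0; col < 4; col++)
        for (int row = 0; row < 4; row++) {
            float sum = 0.0f;
            for (int k = 0; k < 4; k++)
                sum += a.v[k * 4 + row] * b.v[col * 4 + k];
            r.v[col * 4 + row] = sum;
        }
    return r;
}

// Identity and translation builders for a quick check
M44 identity44(void)
{
    M44 m = {{0}};
    m.v[0] = m.v[5] = m.v[10] = m.v[15] = 1.0f;
    return m;
}

M44 translation44(float x, float y, float z)
{
    M44 m = identity44();
    m.v[12] = x; m.v[13] = y; m.v[14] = z;
    return m;
}
```

Multiplying two translations composes them: translation44(1, 2, 3) times translation44(10, 0, 0) carries the combined offset (11, 2, 3) in elements 12-14.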
OpenGL also has its own matrix multiplication function, glMultMatrix, that takes a
matrix and multiplies it by the currently loaded matrix and stores the result at the top of
the matrix stack. In our final code fragment, we once again show code equivalent to the
preceding, but this time we let OpenGL do the actual multiplication:
M3DMatrix44f rotationMatrix, translationMatrix, transformationMatrix;
...
glPushMatrix();
m3dRotationMatrix44(rotationMatrix, m3dDegToRad(yRot), 0.0f, 1.0f, 0.0f);
m3dTranslationMatrix44(translationMatrix, 0.0f, 0.0f, -2.5f);
glMultMatrixf(translationMatrix);
glMultMatrixf(rotationMatrix);
gltDrawTorus(0.35f, 0.15f, 40, 20);
glPopMatrix();
As you can see, there is considerable flexibility in how you handle model transformations.
Using the OpenGL functions allows you to offload as much as possible to the graphics
hardware. Using your own functions gives you ultimate control over any intermediate
steps. The freedom to mix and match approaches as needed is another reason OpenGL is
an extremely powerful and flexible API for doing 3D graphics.
Note that OpenGL rendering is aliased by default. Let's make a few new functions and implement our first procedurally generated texture. Add the following function prototypes, defines, and variables to the global section of main.c:
void makeCheckImage(void);
void loadTexture(void);
...
// Write the defines in CAPS
#define checkImageWidth 64
#define checkImageHeight 64
GLubyte checkImage[checkImageHeight][checkImageWidth][4];
GLuint texName;
Then implement the bodies of the new functions.
void makeCheckImage(void)
{
int i, j, c;
for (i = 0; i < checkImageHeight; i++)
{
for (j = 0; j < checkImageWidth; j++)
{
c = (((i & 0x8) == 0) ^ ((j & 0x8) == 0)) * 255; // Test bit 3 of i and of j; XOR the results for an 8x8 checkerboard, then scale to 0 or 255
// ^ is the bitwise XOR operator
checkImage[i][j][0] = (GLubyte) c; // R
checkImage[i][j][1] = (GLubyte) c; // G
checkImage[i][j][2] = (GLubyte) c; // B
checkImage[i][j][3] = 255; // A (Changing the value will not affect the output of this code)
}
}
}
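To see why that expression produces a checkerboard, factor it into a helper and check a few texels (the helper name is ours):

```c
#include <assert.h>

// Value written for texel (i, j): bit 3 (0x8) of each index selects
// which 8x8 block the texel is in; XOR-ing the two tests flips the
// color every 8 texels in each direction, giving a checkerboard.
int checkerValue(int i, int j)
{
    return (((i & 0x8) == 0) ^ ((j & 0x8) == 0)) * 255;
}
```

Texel (0, 0) is black, (8, 0) and (0, 8) are white, and (8, 8) is black again — 8×8 squares of alternating color.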
Next we add the function body for our new texture generating function:
void loadTexture(void)
{
makeCheckImage();
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glGenTextures(1, &texName);
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, checkImageWidth, checkImageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, checkImage);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
}
Add this to initialize().
glEnable(GL_TEXTURE_2D);
loadTexture();
Then add this into display():
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(-2.0, -1.0, 0.0f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(-2.0, 1.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f(0.0f, 1.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f(0.0f, -1.0f, 0.0f);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(1.0f, -1.0f, 0.0f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(1.0f, 1.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f(2.41421f, 1.0f, -1.41421f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f(2.41421f, -1.0f, -1.41421f);
glEnd();
If you compile and run this project, you'll see a procedurally generated checkerboard texture on two quads: one facing the screen on the left, and one angled away from the viewer on the right.
MSDN documentation about bitwise operators: https://docs.microsoft.com/en-us/cpp/c-language/c-bitwise-operators?view=msvc-170
Let's start drawing some real-world objects, like a house, and add some texturing to them.
Begin by adding a new file called Resource.rc and a texture.h, as in lesson 22. Remember to enable depth testing, add GL_DEPTH_BUFFER_BIT to the glClear call in display, include texture.h in main, and add the load_texture function.
Then let's declare some global variables to use for our textures. For simplicity's sake, we add a function prototype and the GLuint texture variables just below our previous code:
void draw_house(void);
GLuint house_front_texture, house_left_texture, house_right_texture, house_back_texture;
GLuint roof_left_texture, roof_right_texture;
GLuint door_texture;
GLuint chimney_texture;
Add the following code into initialize:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glClearDepth(1.0f);
load_texture(&house_front_texture, MAKEINTRESOURCE(IDI_TEXTURE_FRONT));
load_texture(&house_left_texture, MAKEINTRESOURCE(IDI_TEXTURE_LEFT));
load_texture(&house_right_texture, MAKEINTRESOURCE(IDI_TEXTURE_RIGHT));
load_texture(&house_back_texture, MAKEINTRESOURCE(IDI_TEXTURE_BACK));
load_texture(&door_texture, MAKEINTRESOURCE(IDI_TEXTURE_DOOR));
load_texture(&roof_left_texture, MAKEINTRESOURCE(IDI_TEXTURE_ROOF_LEFT));
load_texture(&roof_right_texture, MAKEINTRESOURCE(IDI_TEXTURE_ROOF_RIGHT));
load_texture(&chimney_texture, MAKEINTRESOURCE(IDI_TEXTURE_CHIMNEY));
Then let's implement the body of draw_house:
void draw_house(void)
{
glBindTexture(GL_TEXTURE_2D, house_front_texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(0.0f, 0.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f(1.0f, 0.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f(1.0f, 1.0f, 0.0f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(0.0f, 1.0f, 0.0f);
glEnd();
glBindTexture(GL_TEXTURE_2D, house_right_texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(1.0f, 0.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f(1.0f, 0.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f(1.0f, 1.0f, -1.0f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(1.0f, 1.0f, 0.0f);
glEnd();
glBindTexture(GL_TEXTURE_2D, house_left_texture);
glBegin(GL_QUADS);
glColor3f(1.0f, 1.0f, 0.0f);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(0.0f, 0.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f(0.0f, 0.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f(0.0f, 1.0f, -1.0f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(0.0f, 1.0f, 0.0f);
glEnd();
glColor3f(1.0f, 1.0f, 1.0f);
glBindTexture(GL_TEXTURE_2D, house_back_texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(0.0f, 0.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f(1.0f, 0.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f(1.0f, 1.0f, -1.0f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(0.0f, 1.0f, -1.0f);
glEnd();
glBindTexture(GL_TEXTURE_2D, roof_left_texture);
glBegin(GL_QUADS);
glTexCoord2f(1.0f, 0.0f);
glVertex3f(0.0f, 1.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f(0.5f, 1.5f, 0.0f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(0.5f, 1.5f, -1.0f);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(0.0f, 1.0f, -1.0f);
glEnd();
glBindTexture(GL_TEXTURE_2D, roof_right_texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(1.0f, 1.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f(1.0f, 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f(0.5f, 1.5f, -1.0f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(0.5f, 1.5f, 0.0f);
glEnd();
glBindTexture(GL_TEXTURE_2D, house_front_texture);
glBegin(GL_TRIANGLES);
glTexCoord2f(0.2f, 0.0f);
glVertex3f(0.0f, 1.0f, 0.0f);
glTexCoord2f(0.8f, 0.0f);
glVertex3f(1.0f, 1.0f, 0.0f);
glTexCoord2f(1.0f, 0.5f);
glVertex3f(0.5f, 1.5f, 0.0f);
glEnd();
glBindTexture(GL_TEXTURE_2D, house_front_texture);
glBegin(GL_TRIANGLES);
glTexCoord2f(0.8f, 0.0f);
glVertex3f(1.0f, 1.0f, -1.0f);
glTexCoord2f(0.2f, 0.0f);
glVertex3f(0.0f, 1.0f, -1.0f);
glTexCoord2f(0.5f, 1.0f);
glVertex3f(0.5f, 1.5f, -1.0f);
glEnd();
glBindTexture(GL_TEXTURE_2D, door_texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(0.4f, 0.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f(0.6f, 0.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f(0.6f, 0.6f, 0.0f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(0.4f, 0.6f, 0.0f);
glEnd();
glBindTexture(GL_TEXTURE_2D, chimney_texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(0.3f, 1.2f, -0.4f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f(0.5f, 1.2f, -0.4f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f(0.5f, 1.8f, -0.4f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(0.3, 1.8, -0.4f);
glEnd();
glBindTexture(GL_TEXTURE_2D, chimney_texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(0.5f, 1.2f, -0.4f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f(0.5f, 1.8f, -0.4f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f(0.5f, 1.8f, -0.6f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(0.5f, 1.2f, -0.6f);
glEnd();
glBindTexture(GL_TEXTURE_2D, chimney_texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(0.5f, 1.2f, -0.6f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f(0.5f, 1.8f, -0.6f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f(0.3f, 1.8f, -0.6f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(0.3f, 1.2f, -0.6f);
glEnd();
glBindTexture(GL_TEXTURE_2D, chimney_texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(0.3f, 1.2f, -0.6f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f(0.3f, 1.2f, -0.4f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f(0.3f, 1.8f, -0.4f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(0.3f, 1.8f, -0.6f);
glEnd();
}
Then call draw_house from display, rotating it with a static float variable:
static float rotate = 0.0f;
rotate += 0.25f;
glRotatef(rotate, 0.0f, 1.0f, 0.0f);
draw_house();
In a future iteration of this tutorial, add a helper that wraps each vertex in a point struct (p.x, p.y, p.z), and think of a better naming convention for the texture variables. A first sketch:
typedef struct point {
float x, y, z;
} point;
point make_point(float x, float y, float z) {
point p;
p.x = x;
p.y = y;
p.z = z;
return p;
}
#include <windows.h>
#include <GL/gl.h>
#include <GL/glu.h>
#include <stdbool.h>
#pragma comment(lib, "opengl32.lib")
#pragma comment(lib, "glu32.lib")
#pragma comment(linker, "/subsystem:windows" /*/entry:mainCRTStartup*/)
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
// LESSON 29
void mandelbrot(void);
struct type_rgb { float r; float g; float b; };
// pixels holds one RGB color per framebuffer pixel (1440 x 841); pattern is a predefined set of colors indexed by iteration count
struct type_rgb pixels[841 * 1440], pattern[999];
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-SDK");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_KEYDOWN:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
int i;
float r, g, b;
// All pixels are initialized to white (841 * 1440 white pixels)
for (i = 0; i < 841 * 1440; i++)
{
pixels[i].r = 1;
pixels[i].g = 1;
pixels[i].b = 1;
}
i = 0;
for (r = 0.1f; r <= 0.9f; r = r + 0.1f)
{
for (g = 0.1f; g <= 0.9f; g = g + 0.1f)
{
for (b = 0.1f; b <= 0.9f; b = b + 0.1f)
{
// This is a simple way to manipulate the colors
pattern[i].r = b;
pattern[i].g = r;
pattern[i].b = g;
// Intended to fill 9 x 9 x 9 = 729 color patterns; note that the float loop bounds actually stop each loop after 8 steps, so only 512 entries get filled
/*pattern[i].r = r;
pattern[i].g = g;
pattern[i].b = b;*/
i++;
}
}
}
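A side note on those pattern loops: float loop counters with a step of 0.1f are a classic pitfall, because 0.1 is not exactly representable in binary floating point. This standalone sketch (not part of the tutorial code) counts how many iterations such a loop actually performs:

```c
#include <assert.h>

/* Counts how many times a loop stepping a float by 0.1f from 0.1f
 * while <= 0.9f actually runs. Mathematically it should run 9 times,
 * but accumulated rounding error makes the counter overshoot 0.9f early. */
int count_float_steps(void)
{
    int count = 0;
    for (float v = 0.1f; v <= 0.9f; v = v + 0.1f)
        count++;
    return count;
}
```

On IEEE-754 hardware this returns 8, not 9, which is why the triple loop fills 8 x 8 x 8 = 512 patterns rather than the 729 one might expect; integer loop counters avoid the problem entirely.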
// Reinitializing the remaining patterns to white
/*for (; i < 999; i++) // pattern has 999 elements, so the last valid index is 998
{
pattern[i].r = 1;
pattern[i].g = 1;
pattern[i].b = 1;
}*/
mandelbrot();
resize(800, 600);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glDrawPixels(1440, 841, GL_RGB, GL_FLOAT, pixels); // GL_RGB is the pixel format; OR-ing two format enums together is not valid
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITORINFOF_PRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
void mandelbrot(void)
{
// The Mandelbrot set lives in the complex plane: each point C has a real and an imaginary part
// x0 = real part of C (range: -2.5 to 1.1)
// y0 = imaginary part of C (range: -1 to 1.1)
float x0, y0, x, y, xtemp;
// iteration: controlled number of iterations
// max_iteration: the maximum number of iterations
// loc = location of the current [x, y]
int iteration, max_iteration, loc = 0;
// Zn is a complex number; the Mandelbrot iteration is Z(n+1) = Zn^2 + C
// C = x0 + i*y0, where x0 is the real part and y0 [-1, 1] the imaginary part
// A point is treated as escaped once |Zn|^2 = x^2 + y^2 grows past the escape radius
// https://simple.wikipedia.org/wiki/Mandelbrot_set#:~:text=The%20Mandelbrot%20set%20can%20be,positive%20integer%20(natural%20number).
for (y0 = -1; y0 < 1.1f; y0 = y0 + 0.0025f)
{
for (x0 = -2.5f; x0 < 1.1f; x0 = x0 + 0.0025f)
{
x = 0;
y = 0;
iteration = 0;
max_iteration = 1000;
for (iteration = 0; ((x * x) + (y * y) < (2 * 2)) && (iteration < max_iteration); iteration = iteration + 1)
{
xtemp = (x * x) - (y * y) + x0;
y = (2 * x * y) + y0;
x = xtemp;
pixels[loc].r = pattern[iteration].r;
pixels[loc].g = pattern[iteration].g;
pixels[loc].b = pattern[iteration].b;
}
if (iteration >= 999)
{
pixels[loc].r = 0;
pixels[loc].g = 0;
pixels[loc].b = 0;
}
loc = loc + 1;
}
}
}
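The core of mandelbrot() is the escape-time iteration: for each point C, iterate Z = Z^2 + C and count the steps until |Z| exceeds 2. Here is a minimal standalone sketch of just that test (the function name escape_time is my own):

```c
#include <assert.h>

/* Returns the number of iterations before the point (x0, y0) escapes
 * the circle of radius 2, capped at max_iteration. Points that never
 * escape (i.e. inside the Mandelbrot set) return max_iteration. */
int escape_time(float x0, float y0, int max_iteration)
{
    float x = 0.0f, y = 0.0f;
    int iteration = 0;
    while ((x * x) + (y * y) < 4.0f && iteration < max_iteration) {
        float xtemp = (x * x) - (y * y) + x0;
        y = (2.0f * x * y) + y0;
        x = xtemp;
        iteration++;
    }
    return iteration;
}
```

The origin never escapes, so it hits the iteration cap; a point like (2, 2) escapes immediately. The renderer maps this count to a color from the pattern array.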
This code doesn't resize with the window; implementing that is left as an exercise.
Identity matrix (known as a unit matrix)...
1x + 2y + 3z = 100
4x + 5y + 6z = 200
7x + 8y + 3z = 300
A matrix is a vehicle for transformations.
x y z
-- --
| 1 0 0 |
| 0 1 0 |
| 0 0 1 |
-- --
x y z
-- -- - - -- --
| 1 2 3 | | x | | 100 |
| 4 5 6 | X | y | = | 200 |
| 7 8 3 | | z | | 300 |
-- -- - - -- --
x y z
-- --
| 1 0 0 0 | | x | | x' |
| 0 1 0 0 | | y | | y' |
| 0 0 1 0 | X | z | = | z' |
| 0 0 0 1 | | | | |
-- --
A call like glTranslatef(3.0f, 5.0f, -8.0f) builds a translation matrix with tx = 3.0, ty = 5.0, tz = -8.0:
x y z
-- -- - - - -
| 1 0 0 tx | | x | | x' |
| 0 1 0 ty | | y | | y' |
| 0 0 1 tz | X | z | = | z' |
| 0 0 0 1 | | 1 | | 1 |
-- -- - - - -
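That translation is just this matrix-vector product. A minimal standalone sketch, using row-major storage so the code reads like the notation above (OpenGL itself stores matrices column-major, and the helper names here are my own):

```c
#include <assert.h>

/* Multiplies a 4x4 row-major matrix by a column vector (x, y, z, w). */
void mat4_mul_vec4(const float m[16], const float v[4], float out[4])
{
    for (int row = 0; row < 4; row++) {
        out[row] = 0.0f;
        for (int col = 0; col < 4; col++)
            out[row] += m[row * 4 + col] * v[col];
    }
}

/* Builds a row-major translation matrix for (tx, ty, tz). */
void mat4_translation(float tx, float ty, float tz, float m[16])
{
    for (int i = 0; i < 16; i++)
        m[i] = (i % 5 == 0) ? 1.0f : 0.0f; /* identity: 1s on the diagonal */
    m[3] = tx;  /* row 0, column 3 */
    m[7] = ty;  /* row 1, column 3 */
    m[11] = tz; /* row 2, column 3 */
}
```

Applying the matrix from glTranslatef(3.0f, 5.0f, -8.0f) to the point (1, 1, 1, 1) yields (4, 6, -7, 1), i.e. each coordinate is shifted by its translation component.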
A rotation about the X axis by θ looks like this:
-- --
| 1.0f 0.0f 0.0f 0.0f |
| 0.0f cos θ -sin θ 0.0f |
| 0.0f sin θ cos θ 0.0f |
| 0.0f 0.0f 0.0f 1.0f |
-- --
You can build these matrices yourself and load them in your display function:
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
// LESSON 30
GLfloat identityMatrix[16];
identityMatrix[0] = 1.0f;
identityMatrix[1] = 0.0f;
identityMatrix[2] = 0.0f;
identityMatrix[3] = 0.0f;
identityMatrix[4] = 0.0f;
identityMatrix[5] = 1.0f;
identityMatrix[6] = 0.0f;
identityMatrix[7] = 0.0f;
identityMatrix[8] = 0.0f;
identityMatrix[9] = 0.0f;
identityMatrix[10] = 1.0f;
identityMatrix[11] = 0.0f;
identityMatrix[12] = 0.0f;
identityMatrix[13] = 0.0f;
identityMatrix[14] = 0.0f;
identityMatrix[15] = 1.0f;
GLfloat translationMatrix[16];
translationMatrix[0] = 1.0f;
translationMatrix[1] = 0.0f;
translationMatrix[2] = 0.0f;
translationMatrix[3] = 0.0f;
translationMatrix[4] = 0.0f;
translationMatrix[5] = 1.0f;
translationMatrix[6] = 0.0f;
translationMatrix[7] = 0.0f;
translationMatrix[8] = 0.0f;
translationMatrix[9] = 0.0f;
translationMatrix[10] = 1.0f;
translationMatrix[11] = 0.0f;
translationMatrix[12] = 0.0f;
translationMatrix[13] = 0.0f;
translationMatrix[14] = -3.0f;
translationMatrix[15] = 1.0f;
glLoadMatrixf(identityMatrix);
glMultMatrixf(translationMatrix);
static GLfloat angle = 0.0f;
GLfloat rotationMatrix[16];
rotationMatrix[0] = 1.0f;
rotationMatrix[1] = 0.0f;
rotationMatrix[2] = 0.0f;
rotationMatrix[3] = 0.0f;
rotationMatrix[4] = 0.0f;
rotationMatrix[5] = cos(angle);
rotationMatrix[6] = sin(angle);
rotationMatrix[7] = 0.0f;
rotationMatrix[8] = 0.0f;
rotationMatrix[9] = -(sin(angle));
rotationMatrix[10] = cos(angle);
rotationMatrix[11] = 0.0f;
rotationMatrix[12] = 0.0f;
rotationMatrix[13] = 0.0f;
rotationMatrix[14] = 0.0f;
rotationMatrix[15] = 1.0f;
glMultMatrixf(rotationMatrix);
angle += 0.01f;
glBegin(GL_TRIANGLES);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex2f(0.0f, 1.0f);
glColor3f(0.0f, 1.0f, 0.0f);
glVertex2f(-1.0f, -1.0f);
glColor3f(0.0f, 0.0f, 1.0f);
glVertex2f(1.0f, -1.0f);
glEnd();
SwapBuffers(g_hdc);
}
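One subtlety in the display code: glLoadMatrixf and glMultMatrixf expect the 16 floats in column-major order, so the translation components live at indices 12, 13 and 14 (which is why translationMatrix[14] = -3.0f moves along z), even though the math notation writes them in the last column of each row. A small sketch (helper name mine) that fills a column-major translation matrix the way OpenGL expects:

```c
#include <assert.h>

/* Fills a 4x4 column-major translation matrix, as expected by
 * glLoadMatrixf / glMultMatrixf: element index = column * 4 + row. */
void mat4_translation_gl(float tx, float ty, float tz, float m[16])
{
    for (int i = 0; i < 16; i++)
        m[i] = (i % 5 == 0) ? 1.0f : 0.0f; /* identity */
    m[12] = tx; /* column 3, row 0 */
    m[13] = ty; /* column 3, row 1 */
    m[14] = tz; /* column 3, row 2 */
}
```

For a pure translation the matrix is symmetric enough that mixing the conventions up only bites once rotations are involved, so it pays to get the indexing right early.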
Case study: the three basic rotation matrices (column-vector convention):
            | cos θ  -sin θ  0  0 |
Rz(theta) = | sin θ   cos θ  0  0 |
            | 0       0      1  0 |
            | 0       0      0  1 |
            | cos θ   0  sin θ  0 |
Ry(theta) = | 0       1  0      0 |
            | -sin θ  0  cos θ  0 |
            | 0       0  0      1 |
            | 1  0       0      0 |
Rx(theta) = | 0  cos θ  -sin θ  0 |
            | 0  sin θ   cos θ  0 |
            | 0  0       0      1 |
https://cupdf.com/document/perspective-projections-opengl-viewing-3d-projections-opengl-viewing-3d-clipping.html
This time we are going to simulate a planet orbiting a sun, using GLU quadrics to draw the spheres. Start by adding these global variables to your main.c file:
GLfloat year = 0;
GLfloat day = 0;
GLUquadric* quadric = NULL;
We'll handle the movement manually for now, so add the following code to your WndProc under WM_KEYDOWN:
case 'y':
year = (int)(year + 3) % 360;
break;
case 'Y':
year = (int)(year - 3) % 360;
break;
case 'd':
day = (int)(day + 6) % 360;
break;
case 'D':
day = (int)(day - 6) % 360;
break;
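A small caveat with this wrap-around arithmetic: in C, the % operator keeps the sign of the dividend, so pressing Y at year = 0 gives (0 - 3) % 360 == -3, not 357. glRotatef accepts negative angles, so this is harmless here, but it is worth knowing:

```c
#include <assert.h>

/* In C (since C99), integer division truncates toward zero, so the
 * remainder of % takes the sign of the dividend. */
int wrap_angle(int a)
{
    return a % 360; /* can be negative for negative a */
}
```

If you ever need the result normalized to [0, 360), use ((a % 360) + 360) % 360 instead.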
This will move the planet along its orbit around the sun when the user presses y or Y, and spin the planet itself when the user presses d or D.
To make the planet correctly disappear behind the sun, enable depth testing (in initialize):
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
To ensure that you can see the sun and planet, add a gluPerspective call to your resize function:
gluPerspective(45.0f, (GLfloat)w/(GLfloat)h, 0.1f, 100.0f);
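The (GLfloat) casts matter here: w and h are ints, and integer division would truncate the aspect ratio before gluPerspective ever sees it. A quick standalone check (helper name mine):

```c
#include <assert.h>

/* Computes the aspect ratio the way the resize function does,
 * casting to float BEFORE dividing. */
float aspect_ratio(int w, int h)
{
    return (float)w / (float)h;
}
```

Without the casts, 800 / 600 evaluates to 1 and every scene would render as if the window were square.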
Finally we position the camera and add each object into the scene:
gluLookAt(
0.0f, 0.0f, 5.0f,
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f
);
glPushMatrix(); // OpenGL keeps separate stacks for the projection and modelview matrices
glRotatef(90.0f, 1.0f, 0.0f, 0.0f); // Pushed onto the modelview (transformation) stack
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL); // GL_FILL draws filled polygons, the opposite of wireframe (GL_LINE)
quadric = gluNewQuadric(); // Create the quadric object
glColor3f(1.0f, 1.0f, 0.0f); // Yellow, for the sun
gluSphere(quadric, 0.75f, 30.0f, 30.0f); // radius = 0.75; 30 slices (longitude) and 30 stacks (latitude) set the resolution
glPopMatrix(); // Restore the modelview matrix saved by glPushMatrix
glPushMatrix(); // Second transformation block, for the planet
glRotatef((GLfloat)year, 0.0f, 1.0f, 0.0f); // Orbit angle around the sun
glTranslatef(1.5f, 0.0f, 0.0f); // Move out to the orbit radius
glRotatef((GLfloat)90.0f, 1.0f, 0.0f, 0.0f); // Without this call the sphere's poles point along the y axis
glRotatef((GLfloat)day, 0.0f, 0.0f, 1.0f); // Spin around the planet's own axis
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); // Wireframe
quadric = gluNewQuadric();
glColor3f(0.4f, 0.9f, 1.0f); // Light blue, for the planet
gluSphere(quadric, 0.2f, 20.0f, 20.0f);
glPopMatrix();
Now let's add a moon orbiting the planet, to continue our space simulation of orbiting bodies.
First we need to declare a variable that controls the moon, so add the following to the global scope:
GLfloat moon_revolution = 0.0f;
The moon should orbit the planet from the previous lesson the same way the planet orbits the sun, so you have to add another glPushMatrix / glPopMatrix pair inside the stack that controls the planet's orbit:
glPushMatrix();
glRotatef((GLfloat)year, 0.0f, 1.0f, 0.0f);
glTranslatef(1.5f, 0.0f, 0.0f);
glRotatef((GLfloat)90.0f, 1.0f, 0.0f, 0.0f);
glRotatef((GLfloat)day, 0.0f, 0.0f, 1.0f);
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
quadric = gluNewQuadric();
glColor3f(0.4f, 0.9f, 1.0f);
gluSphere(quadric, 0.2f, 20.0f, 20.0f);
... // Add the new substack here
glPopMatrix();
The new substack should look exactly like the previous ones except for the size of the moon. I've also chosen to draw the moon with a filled glPolygonMode to improve visibility.
glPushMatrix();
// The planet is rotating, so the moon follows that rotation
glRotatef((GLfloat)day, 0.0f, 0.0f, 1.0f);
glTranslatef(0.5f, 0.0f, 0.0f);
// Spin around the moon's own axis
glRotatef((GLfloat)moon_revolution, 0.0f, 0.0f, 1.0f);
// Rotate the sphere so its poles point along the correct axis
glRotatef(90.0f, 1.0f, 0.0f, 0.0f);
// Create the moon's sphere
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
quadric = gluNewQuadric();
glColor3f(0.8f, 0.8f, 0.8f);
gluSphere(quadric, 0.1f, 15.0f, 15.0f);
glPopMatrix();
Lastly we update the WndProc to add orbital movement when the user presses d or D on the keyboard.
Note that we have changed the case in our switch statement to case WM_CHAR: to handle character input correctly.
...
case WM_CHAR:
...
case 'd':
day = (int)(day + 6) % 360;
moon_revolution = (int)(moon_revolution + 9) % 360;
break;
case 'D':
day = (int)(day - 6) % 360;
moon_revolution = (int)(moon_revolution + 9) % 360;
break;
...
If you compile and run your program now, then press d/D or y/Y, you'll see the planet orbiting the sun, with the moon orbiting the planet.
Complete code from this lesson
#include <windows.h>
#include <GL/gl.h>
#include <GL/glu.h>
#include <stdbool.h>
#pragma comment(lib, "opengl32.lib")
#pragma comment(lib, "glu32.lib")
#pragma comment(linker, "/subsystem:windows")
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
GLfloat year = 0;
GLfloat day = 0;
GLUquadric* quadric = NULL;
GLfloat moon_revolution = 0.0f;
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-SDK");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
// case WM_KEYDOWN:
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
case 'y':
year = (int)(year + 3) % 360;
break;
case 'Y':
year = (int)(year - 3) % 360;
break;
case 'd':
day = (int)(day + 6) % 360;
moon_revolution = (int)(moon_revolution + 9) % 360;
break;
case 'D':
day = (int)(day - 6) % 360;
moon_revolution = (int)(moon_revolution + 9) % 360;
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
resize(800, 600);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f, (GLfloat)w/(GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// LESSON 31
gluLookAt(
0.0f, 0.0f, 5.0f,
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f
);
glPushMatrix();
glRotatef(90.0f, 1.0f, 0.0f, 0.0f);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
quadric = gluNewQuadric();
glColor3f(1.0f, 1.0f, 0.0f);
gluSphere(quadric, 0.75f, 30.0f, 30.0f);
glPopMatrix();
glPushMatrix();
glRotatef((GLfloat)year, 0.0f, 1.0f, 0.0f);
glTranslatef(1.5f, 0.0f, 0.0f);
glRotatef((GLfloat)90.0f, 1.0f, 0.0f, 0.0f);
glRotatef((GLfloat)day, 0.0f, 0.0f, 1.0f);
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
quadric = gluNewQuadric();
glColor3f(0.4f, 0.9f, 1.0f);
gluSphere(quadric, 0.2f, 20.0f, 20.0f);
glPushMatrix();
glRotatef((GLfloat)day, 0.0f,0.0f, 1.0f);
glTranslatef(0.5f, 0.0f, 0.0f);
glRotatef((GLfloat)moon_revolution, 0.0f, 0.0f, 1.0f);
glRotatef(90.0f, 1.0f, 0.0f, 0.0f);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
quadric = gluNewQuadric();
glColor3f(0.8f, 0.8f, 0.8f);
gluSphere(quadric, 0.1f, 15.0f, 15.0f);
glPopMatrix();
glPopMatrix();
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITORINFOF_PRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
This time we'll create a moving robotic arm, with joints that seem connected.
Begin by declaring three new variables in the global scope:
GLfloat shoulder = 0.0f;
GLfloat elbow = 0.0f;
GLUquadric* quadric = NULL;
Now we update our display function with the following:
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glColor3f(0.5f, 0.35f, 0.005f);
glTranslatef(0.0f, 0.0f, -12.0f);
glPushMatrix();
glRotatef((GLfloat)shoulder, 0.0f, 0.0f, 1.0f);
glTranslatef(1.0f, 0.0f, 0.0f);
glPushMatrix();
glScalef(2.0f, 0.5f, 1.0f);
quadric = gluNewQuadric();
gluSphere(quadric, 0.5f, 10.0f, 10.0f);
glPopMatrix();
glTranslatef(1.0f, 0.0f, 0.0f);
glRotatef((GLfloat)elbow, 0.0f, 0.0f, 1.0f);
glTranslatef(1.0f, 0.0f, 0.0f);
glPushMatrix();
glScalef(2.0f, 0.5f, 1.0f);
quadric = gluNewQuadric();
gluSphere(quadric, 0.5f, 10.0f, 10.0f);
glPopMatrix();
glPopMatrix();
And lastly we update our WndProc with the following code:
case 'S':
shoulder = (int)(shoulder + 3) % 360;
break;
case 's':
shoulder = (int)(shoulder - 3) % 360;
break;
case 'E':
elbow = (int)(elbow + 3) % 360;
break;
case 'e':
elbow = (int)(elbow - 3) % 360;
break;
}
Now the user can rotate the shoulder and elbow of the robotic arm by pressing s/S or e/E on the keyboard.
Complete code from this lesson
#include <windows.h>
#include <GL/gl.h>
#include <GL/glu.h>
#include <stdbool.h>
#pragma comment(lib, "opengl32.lib")
#pragma comment(lib, "glu32.lib")
#pragma comment(linker, "/subsystem:windows" /*/entry:mainCRTStartup*/)
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
GLfloat shoulder = 0.0f;
GLfloat elbow = 0.0f;
GLUquadric* quadric = NULL;
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-SDK");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
case 'S':
shoulder = (int)(shoulder + 3) % 360;
break;
case 's':
shoulder = (int)(shoulder - 3) % 360;
break;
case 'E':
elbow = (int)(elbow + 3) % 360;
break;
case 'e':
elbow = (int)(elbow - 3) % 360;
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
resize(800, 600);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f, (GLfloat)w/(GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glColor3f(0.5f, 0.35f, 0.005f);
glTranslatef(0.0f, 0.0f, -12.0f);
glPushMatrix();
glRotatef((GLfloat)shoulder, 0.0f, 0.0f, 1.0f);
glTranslatef(1.0f, 0.0f, 0.0f);
glPushMatrix();
glScalef(2.0f, 0.5f, 1.0f);
quadric = gluNewQuadric();
gluSphere(quadric, 0.5f, 10.0f, 10.0f);
glPopMatrix();
glTranslatef(1.0f, 0.0f, 0.0f);
glRotatef((GLfloat)elbow, 0.0f, 0.0f, 1.0f);
glTranslatef(1.0f, 0.0f, 0.0f);
glPushMatrix();
glScalef(2.0f, 0.5f, 1.0f);
quadric = gluNewQuadric();
gluSphere(quadric, 0.5f, 10.0f, 10.0f);
glPopMatrix();
glPopMatrix();
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITORINFOF_PRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
Today we'll procedurally draw a sphere. Let's include the math library and declare a function prototype:
...
#define _USE_MATH_DEFINES 1
#include <math.h>
...
void draw_sphere(float, int);
Our function implementation takes a float r for the radius, and an int n for the number of subdivisions of the triangle strips.
Then we implement the drawing function to procedurally generate the sphere. We declare the local variables we need: i and j as loop counters; phi1, phi2 (longitude) and theta (latitude) to define the sphere; and s and t, which are reserved for texture coordinates and not used in this lesson.
Next we declare ex, ey and ez, then px, py and pz, all as GLfloat: the e-variables hold a unit vector on the sphere, and the p-variables hold that vector scaled by the radius r.
void draw_sphere(float r, int n)
{
int i, j;
GLdouble phi1, phi2, theta, s, t;
GLfloat ex, ey, ez;
GLfloat px, py, pz;
if (r < 0) r = -r;
if (n < 0) n = -n;
// Guard against degenerate input: a sphere needs a minimum number of subdivisions
if (n < 4 || r <= 0) {
// Fall back to drawing a single point at the origin
glBegin(GL_POINTS);
glVertex3f(0.0f, 0.0f, 0.0f);
glEnd();
return;
}
for (j = 0; j < n; j++) {
phi1 = j * M_PI * 2 / n;
phi2 = (j + 1) * M_PI * 2 / n;
// Calculates two points per step and draws them with GL_TRIANGLE_STRIP, then advances phi1
glBegin(GL_TRIANGLE_STRIP);
for (i = 0; i <= n; i++) {
theta = i * M_PI / n;
ex = sin(theta) * cos(phi2);
ey = sin(theta) * sin(phi2);
ez = cos(theta);
px = r * ex;
py = r * ey;
pz = r * ez;
glVertex3f(px, py, pz);
ex = sin(theta) * cos(phi1);
ey = sin(theta) * sin(phi1);
ez = cos(theta);
px = r * ex;
py = r * ey;
pz = r * ez;
glVertex3f(px, py, pz);
}
glEnd();
}
}
We handle the degenerate case with if (n < 4 || r <= 0), then append two points per step of the inner loop, advancing the longitude with phi2 = (j + 1) * M_PI * 2 / n.
Then in our display we add the viewing transformation and call the newly created function:
glTranslatef(0.0f, 0.0f, -3.0f);
draw_sphere(0.2f, 60);
Compile and run the program and you'll see a white, three-dimensional sphere in the middle of the screen.
For more details about spherical coordinates, take a look at this link:
https://mathinsight.org/spherical_coordinates
So far this tutorial has focused on the lower-level parts of drawing various shapes, but what is a complete scene without lighting and shading?
In this lesson we'll start adding lighting to our scene. As always, let's declare some variables to use when adding light. In the global space add the following:
#include <GL/glu.h>
...
bool bLight = false;
GLfloat light_ambient[] = { 0.5f, 0.5f, 0.5f, 1.0f };
GLfloat light_diffuse[] = { 1.0f, 1.0f, 1.0f, 0.0f };
GLfloat light_position[] = { 0.0f, 0.0f, 2.0f, 1.0f };
The first variable is just a boolean switch to turn lighting on or off; we initialize bLight to false. Next we define the basics of our lighting model: the ambient light, the diffuse light and the light position.
Ambient light is light without an identifiable source; think of it as the light scattered around the scene by everything else that is lit. Diffuse light comes from a known source, e.g. the sun or a lamp emitting light. Lastly we set the light position, the point the light originates from in our scene.
The fourth component of the light position array tells OpenGL how to interpret the first three: 0.0f makes it a directional light (the array is a direction, like sunlight from infinitely far away), while 1.0f makes it a positional light (the array is a position, like a local lamp).
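What the fourth (w) component does becomes concrete if you translate the vector by hand. This sketch (plain C, not an OpenGL API; translate4 is a hypothetical helper) applies the offset column of a 4x4 translation matrix: with w = 1.0f the light moves with the transform, with w = 0.0f it is unaffected, which is exactly how a direction to an infinitely distant light behaves:

```c
#include <assert.h>

/* Applies a translation (tx, ty, tz) to a homogeneous vector (x, y, z, w).
 * The offset is scaled by w, so w = 0 vectors (directions) never move,
 * while w = 1 vectors (positions) do. */
void translate4(const float t[3], const float v[4], float out[4])
{
    out[0] = v[0] + t[0] * v[3];
    out[1] = v[1] + t[1] * v[3];
    out[2] = v[2] + t[2] * v[3];
    out[3] = v[3];
}
```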
Light has to be enabled for OpenGL to use it, so we need to update our initialize to include lighting in our scene.
...
glClearDepth(1.0f);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_AMBIENT, light_ambient);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
glLightfv(GL_LIGHT0, GL_POSITION, light_position);
glShadeModel(GL_SMOOTH);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
...
In this example we only implement one light source. glClearDepth() sets the value the depth buffer is cleared to. The light is enabled with glEnable; legacy OpenGL supports up to 8 light sources per scene (GL_LIGHT0 - GL_LIGHT7).
We then upload the three arrays we declared in the global space using glLightfv, setting GL_AMBIENT, GL_DIFFUSE and GL_POSITION for our light.
glShadeModel selects how lighting and shading are interpolated across a face; we pass GL_SMOOTH to use the built-in Gouraud shading (GL_FLAT is the alternative). Then we tell OpenGL that we want the GL_NICEST form of perspective correction, to minimize jagged edges.
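GL_SMOOTH (Gouraud) shading computes lighting at each vertex and linearly interpolates the result across the face, while GL_FLAT uses one vertex's value for the whole face. The core interpolation step can be sketched like this (plain C; a simplification of what the rasterizer does, not OpenGL's actual code):

```c
#include <assert.h>

/* Linearly interpolates two per-vertex intensities along an edge,
 * the basic operation behind Gouraud (GL_SMOOTH) shading.
 * t = 0 returns i0, t = 1 returns i1. */
float gouraud_lerp(float i0, float i1, float t)
{
    return i0 + (i1 - i0) * t;
}
```

With GL_FLAT the equivalent would simply return one vertex's intensity for every t.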
Then on to our model drawing routine in display. Since light is calculated from its source toward every point in our scene, we have to define the normals of our geometry so that lighting is computed correctly. A normal is a vector perpendicular to a particular face of our model.
Let's draw a simple cube, which will be lit by the light source we have defined so far.
...
glBegin(GL_QUADS);
// TOP
glNormal3f(0.0f, 1.0f, 0.0f);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex3f(1.0f, 1.0f, -1.0f);
glVertex3f(-1.0f, 1.0f, -1.0f);
glVertex3f(-1.0f, 1.0f, 1.0f);
glVertex3f(1.0f, 1.0f, 1.0f);
// BOTTOM
glNormal3f(0.0f, -1.0f, 0.0f);
glVertex3f(1.0f, -1.0f, -1.0f);
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glVertex3f(1.0f, -1.0f, 1.0f);
// FRONT
glNormal3f(0.0f, 0.0f, 1.0f);
glVertex3f(1.0f, 1.0f, 1.0f);
glVertex3f(-1.0f, 1.0f, 1.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glVertex3f(1.0f, -1.0f, 1.0f);
// BACK
glNormal3f(0.0f, 0.0f, -1.0f);
glVertex3f(1.0f, 1.0f, -1.0f);
glVertex3f(-1.0f, 1.0f, -1.0f);
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(1.0f, -1.0f, -1.0f);
// RIGHT
glNormal3f(1.0f, 0.0f, 0.0f);
glVertex3f(1.0f, 1.0f, -1.0f);
glVertex3f(1.0f, 1.0f, 1.0f);
glVertex3f(1.0f, -1.0f, 1.0f);
glVertex3f(1.0f, -1.0f, -1.0f);
// LEFT
glNormal3f(-1.0f, 0.0f, 0.0f);
glVertex3f(-1.0f, 1.0f, 1.0f);
glVertex3f(-1.0f, 1.0f, -1.0f);
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glEnd();
...
All that is left now is toggling the lighting, which we do in WndProc under WM_CHAR so we can turn the light on or off with a lowercase or uppercase L.
...
case 'l':
case 'L':
if (bLight == false) {
bLight = true;
glEnable(GL_LIGHTING);
}
else {
bLight = false;
glDisable(GL_LIGHTING);
}
break;
...
If you compile and run your program now, you'll see the cube drawn as a completely white, unlit cube; press L and it is lit by our light source.
This time we'll go back to rendering a single light source, but now we'll add the material component to the lighting as well.
We declare the variables we need to handle the light and the material in the global scope:
#include <GL/glu.h>
...
bool bLight = false;
GLUquadric* quadric = NULL;
GLfloat light_ambient[] = { 0.0f, 0.0f, 0.0f, 1.0f };
GLfloat light_diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat light_specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat light_position[] = { 100.0f, 100.0f, 100.0f, 1.0f };
GLfloat material_ambient[] = { 0.0f, 0.0f, 0.0f, 1.0f };
GLfloat material_diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat material_specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat material_shininess[] = { 50.0f };
...
We use the boolean value to enable or disable the lighting in the scene. Then we use a GLU quadric to draw an object to be lit.
Now comes the interesting part: we define our light's components with light_ambient, light_diffuse, light_specular and light_position. OpenGL treats these as the light source in the scene, and you can think of them as:
Light independent of the object
Next we define the material components with material_ambient, material_diffuse, material_specular and material_shininess. These are the parts of the lighting model that interact with the light source to form a semi-realistic result. Material can be thought of as:
How the object itself responds to light
We have to tell OpenGL that we want to enable lighting and material components, so add the following code to our initialize:
...
glEnable(GL_LIGHT0);
glEnable(GL_DEPTH_TEST);
glClearDepth(1.0f);
glDepthFunc(GL_LEQUAL);
glLightfv(GL_LIGHT0, GL_AMBIENT, light_ambient);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
glLightfv(GL_LIGHT0, GL_SPECULAR, light_specular);
glLightfv(GL_LIGHT0, GL_POSITION, light_position);
glMaterialfv(GL_FRONT, GL_AMBIENT, material_ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE, material_diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, material_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, material_shininess);
glShadeModel(GL_SMOOTH);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
...
Nothing new here, except for the glMaterialfv(...) calls that upload the material.
Continuing on we define the display logic by adding:
...
glTranslatef(0.0f, 0.0f, -0.7f);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
quadric = gluNewQuadric();
gluSphere(quadric, 0.2f, 30, 30);
...
Short and simple: set the viewing position by translating the modelview origin, tell OpenGL we want filled polygons, and let GLU handle the drawing of an object, a sphere in this example. (Creating the quadric once in initialize, rather than every frame, would avoid leaking one quadric per call.)
Last but not least we give the user a way to enable / disable the lighting when pressing 'l' (or 'L') inside our WndProc:
...
case 'l':
case 'L':
if (bLight == false) {
bLight = true;
glEnable(GL_LIGHTING);
}
else {
bLight = false;
glDisable(GL_LIGHTING);
}
break;
...
If you compile and run this program you should see a white sphere drawn at the center of the screen; press L and it turns into a grey-ish looking sphere with ambient, diffuse and specular lighting and the material applied.
In today's lecture we'll add multiple light sources to our program, so let's make a simple scene (a pyramid of triangles) with multiple light sources.
Begin by adding a few variables in the global scope:
bool bLight = false;
struct Light {
GLfloat ambient[4];
GLfloat diffuse[4];
GLfloat specular[4];
GLfloat position[4];
};
struct Light light[2] = {
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 1.0f, 0.0f, 0.0f, 1.0f },
{ 1.0f, 0.0f, 0.0f, 1.0f },
{ -2.0f, 0.0f, 0.0f, 1.0f }
},
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 0.0f, 0.0f, 1.0f, 1.0f },
{ 0.0f, 0.0f, 1.0f, 1.0f },
{ 2.0f, 0.0f, 0.0f, 1.0f }
}
};
GLfloat material_ambient[] = { 0.0f, 0.0f, 0.0f, 1.0f };
GLfloat material_diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat material_specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat material_shininess[] = { 50.0f };
We declare our boolean switch to enable or disable the lighting, then we add a struct to hold the data for a single light source. After that we initialize the lighting data with one light source coming from the left and one from the right.
Note that the values in each initializer are assigned in the same order as the members were declared in the struct definition!
Then we set up the lighting and depth handling so OpenGL lights the scene correctly, with the following calls in our initialize:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glShadeModel(GL_SMOOTH);
glClearDepth(1.0f);
glEnable(GL_LIGHT0);
glEnable(GL_LIGHT1);
glLightfv(GL_LIGHT0, GL_AMBIENT, light[0].ambient);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light[0].diffuse);
glLightfv(GL_LIGHT0, GL_SPECULAR, light[0].specular);
glLightfv(GL_LIGHT0, GL_POSITION, light[0].position);
glLightfv(GL_LIGHT1, GL_AMBIENT, light[1].ambient);
glLightfv(GL_LIGHT1, GL_DIFFUSE, light[1].diffuse);
glLightfv(GL_LIGHT1, GL_SPECULAR, light[1].specular);
glLightfv(GL_LIGHT1, GL_POSITION, light[1].position);
glMaterialfv(GL_FRONT, GL_AMBIENT, material_ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE, material_diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, material_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, material_shininess);
To have something to shine a light on we have to draw something in our display function, so we draw a pyramid of four triangles with precalculated normals:
static float angle = 0.0f;
glTranslatef(0.0f, 0.0f, -7.0f);
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glBegin(GL_TRIANGLES);
glNormal3f(0.0f, 0.447214f, 0.894427f);
glVertex3f(0.0f, 1.0f, 0.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glVertex3f(1.0f, -1.0f, 1.0f);
glNormal3f(0.894427f, 0.447214f, 0.0f);
glVertex3f(0.0f, 1.0f, 0.0f);
glVertex3f(1.0f, -1.0f, 1.0f);
glVertex3f(1.0f, -1.0f, -1.0f);
glNormal3f(0.0f, 0.447214f, -0.894427f);
glVertex3f(0.0f, 1.0f, 0.0f);
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(1.0f, -1.0f, -1.0f);
glNormal3f(-0.894427f, 0.447214f, 0.0f);
glVertex3f(0.0f, 1.0f, 0.0f);
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glEnd();
angle += 0.05f;
To finish off we add a user-controlled switch in our WndProc to turn the lighting on or off:
case 'l':
case 'L':
if (bLight == false) {
bLight = true;
glEnable(GL_LIGHTING);
}
else {
bLight = false;
glDisable(GL_LIGHTING);
}
break;
Just put it below any other WM_CHAR code you may already have in there.
If you compile and run the program you should see a white, rotating pyramid; press l or L and it is lit in red from the left and blue from the right.
The entire code for this lesson is found below:
#include <windows.h>
#include <GL/gl.h>
#include <gl/glu.h>
#include <stdbool.h>
#pragma comment(lib, "opengl32.lib")
#pragma comment(lib, "glu32.lib")
#pragma comment(linker, "/subsystem:windows" /*/entry:mainCRTStartup*/)
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
bool bLight = false;
struct Light {
GLfloat ambient[4];
GLfloat diffuse[4];
GLfloat specular[4];
GLfloat position[4];
};
struct Light light[2] = {
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 1.0f, 0.0f, 0.0f, 1.0f },
{ 1.0f, 0.0f, 0.0f, 1.0f },
{ -2.0f, 0.0f, 0.0f, 1.0f }
},
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 0.0f, 0.0f, 1.0f, 1.0f },
{ 0.0f, 0.0f, 1.0f, 1.0f },
{ 2.0f, 0.0f, 0.0f, 1.0f }
}
};
GLfloat material_ambient[] = { 0.0f, 0.0f, 0.0f, 1.0f };
GLfloat material_diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat material_specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat material_shininess[] = { 50.0f };
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-SDK");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
case 'l':
case 'L':
if (bLight == false) {
bLight = true;
glEnable(GL_LIGHTING);
}
else {
bLight = false;
glDisable(GL_LIGHTING);
}
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glShadeModel(GL_SMOOTH);
glClearDepth(1.0f);
glEnable(GL_LIGHT0);
glEnable(GL_LIGHT1);
glLightfv(GL_LIGHT0, GL_AMBIENT, light[0].ambient);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light[0].diffuse);
glLightfv(GL_LIGHT0, GL_SPECULAR, light[0].specular);
glLightfv(GL_LIGHT0, GL_POSITION, light[0].position);
glLightfv(GL_LIGHT1, GL_AMBIENT, light[1].ambient);
glLightfv(GL_LIGHT1, GL_DIFFUSE, light[1].diffuse);
glLightfv(GL_LIGHT1, GL_SPECULAR, light[1].specular);
glLightfv(GL_LIGHT1, GL_POSITION, light[1].position);
glMaterialfv(GL_FRONT, GL_AMBIENT, material_ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE, material_diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, material_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, material_shininess);
resize(800, 600);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
static float angle = 0.0f;
glTranslatef(0.0f, 0.0f, -7.0f);
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glBegin(GL_TRIANGLES);
glNormal3f(0.0f, 0.447214f, 0.894427f);
glVertex3f(0.0f, 1.0f, 0.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glVertex3f(1.0f, -1.0f, 1.0f);
glNormal3f(0.894427f, 0.447214f, 0.0f);
glVertex3f(0.0f, 1.0f, 0.0f);
glVertex3f(1.0f, -1.0f, 1.0f);
glVertex3f(1.0f, -1.0f, -1.0f);
glNormal3f(0.0f, 0.447214f, -0.894427f);
glVertex3f(0.0f, 1.0f, 0.0f);
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(1.0f, -1.0f, -1.0f);
glNormal3f(-0.894427f, 0.447214f, 0.0f);
glVertex3f(0.0f, 1.0f, 0.0f);
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glEnd();
angle += 0.05f;
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITORINFOF_PRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
In this lesson we'll again render multiple light sources, but this time we'll start applying transformations to them. We begin by declaring the variables we need in the global scope:
bool bLight = false;
struct Light {
GLfloat ambient[4];
GLfloat diffuse[4];
GLfloat specular[4];
GLfloat position[4];
GLfloat angle;
};
struct Light light[3] = {
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 1.0f, 0.0f, 0.0f, 1.0f },
{ 1.0f, 0.0f, 0.0f, 1.0f },
{ -2.0f, 0.0f, 0.0f, 1.0f }
},
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 0.0f, 1.0f, 0.0f, 1.0f },
{ 0.0f, 1.0f, 0.0f, 1.0f },
{ 0.0f, 0.0f, 0.0f, 1.0f }
},
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 0.0f, 0.0f, 1.0f, 1.0f },
{ 0.0f, 0.0f, 1.0f, 1.0f },
{ 0.0f, 0.0f, 0.0f, 1.0f }
}
};
GLUquadric* quadric = NULL;
GLfloat material_ambient[] = { 0.0f, 0.0f, 0.0f, 1.0f };
GLfloat material_diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat material_specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat material_shininess[] = { 50.0f };
This time we add an angle member to animate each light source, and we initialize all the light data in the struct Light array. We also declare a GLU quadric for drawing a sphere and define the material for the scene.
Now we initialize the lighting, the depth handling and shading model in our initialize:
...
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glShadeModel(GL_SMOOTH);
glClearDepth(1.0f);
glEnable(GL_LIGHT0);
glEnable(GL_LIGHT1);
glEnable(GL_LIGHT2);
glLightfv(GL_LIGHT0, GL_AMBIENT, light[0].ambient);
glLightfv(GL_LIGHT0, GL_SPECULAR, light[0].specular);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light[0].diffuse);
glLightfv(GL_LIGHT1, GL_AMBIENT, light[1].ambient);
glLightfv(GL_LIGHT1, GL_SPECULAR, light[1].specular);
glLightfv(GL_LIGHT1, GL_DIFFUSE, light[1].diffuse);
glLightfv(GL_LIGHT2, GL_AMBIENT, light[2].ambient);
glLightfv(GL_LIGHT2, GL_SPECULAR, light[2].specular);
glLightfv(GL_LIGHT2, GL_DIFFUSE, light[2].diffuse);
glMaterialfv(GL_FRONT, GL_AMBIENT, material_ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE, material_diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, material_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, material_shininess);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
...
Filling in the display with the lighting, the material and the model to be lit, we add some rotation to the light sources so they move across the surface of the sphere:
First we set up the camera using gluLookAt, placing it three units back and pointing it at the object being lit.
Next we add a matrix transformation for the first light in the scene, and do the same for the remaining two.
At the end we increment each light source's angle by 0.1 per frame so the lights sweep across the surface of the sphere.
...
glPushMatrix();
gluLookAt(
0.0f, 0.0f, 3.0f,
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f
);
glPushMatrix();
glRotatef(light[0].angle, 1.0f, 0.0f, 0.0f);
// Update light[0]'s position along the y-axis
light[0].position[1] = light[0].angle;
glLightfv(GL_LIGHT0, GL_POSITION, light[0].position);
glPopMatrix();
glPushMatrix();
glRotatef(light[1].angle, 0.0f, 1.0f, 0.0f);
// Update the light[1] by a rotation, like above
light[1].position[0] = light[1].angle;
glLightfv(GL_LIGHT1, GL_POSITION, light[1].position);
glPopMatrix();
glPopMatrix();
glPushMatrix();
glRotatef(light[2].angle, 0.0f, 0.0f, 1.0f);
light[2].position[0] = light[2].angle;
glLightfv(GL_LIGHT2, GL_POSITION, light[2].position);
glPopMatrix();
glPushMatrix();
glTranslatef(0.0f, 0.0f, -0.7f);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
quadric = gluNewQuadric();
gluSphere(quadric, 0.2f, 80, 80);
glPopMatrix();
light[0].angle += 0.1f;
light[1].angle += 0.1f;
light[2].angle += 0.1f;
...
Go ahead and compile your program: you'll see an unlit, white sphere, and pressing l or L enables the light sources we just added.
The entire source code for this project is found below:
#include <windows.h>
#include <GL/gl.h>
#include <gl/glu.h>
#include <stdbool.h>
#pragma comment(lib, "opengl32.lib")
#pragma comment(lib, "glu32.lib")
#pragma comment(linker, "/subsystem:windows" /*/entry:mainCRTStartup*/)
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
bool bLight = false;
struct Light {
GLfloat ambient[4];
GLfloat diffuse[4];
GLfloat specular[4];
GLfloat position[4];
GLfloat angle;
};
struct Light light[3] = {
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 1.0f, 0.0f, 0.0f, 1.0f }, // Red diffuse light
{ 1.0f, 0.0f, 0.0f, 1.0f },
{ -2.0f, 0.0f, 0.0f, 1.0f }
},
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 0.0f, 1.0f, 0.0f, 1.0f }, // Green diffuse light
{ 0.0f, 1.0f, 0.0f, 1.0f },
{ 0.0f, 0.0f, 0.0f, 1.0f }
},
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 0.0f, 0.0f, 1.0f, 1.0f }, // Blue diffuse light
{ 0.0f, 0.0f, 1.0f, 1.0f },
{ 0.0f, 0.0f, 0.0f, 1.0f }
}
};
GLUquadric* quadric = NULL;
GLfloat material_ambient[] = { 0.0f, 0.0f, 0.0f, 1.0f };
GLfloat material_diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat material_specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat material_shininess[] = { 50.0f };
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-SDK");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
case 'l':
case 'L':
if (bLight == false) {
bLight = true;
glEnable(GL_LIGHTING);
}
else {
bLight = false;
glDisable(GL_LIGHTING);
}
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 38
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glShadeModel(GL_SMOOTH);
glClearDepth(1.0f);
glEnable(GL_LIGHT0);
glEnable(GL_LIGHT1);
glEnable(GL_LIGHT2);
glLightfv(GL_LIGHT0, GL_AMBIENT, light[0].ambient);
glLightfv(GL_LIGHT0, GL_SPECULAR, light[0].specular);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light[0].diffuse);
glLightfv(GL_LIGHT1, GL_AMBIENT, light[1].ambient);
glLightfv(GL_LIGHT1, GL_SPECULAR, light[1].specular);
glLightfv(GL_LIGHT1, GL_DIFFUSE, light[1].diffuse);
glLightfv(GL_LIGHT2, GL_AMBIENT, light[2].ambient);
glLightfv(GL_LIGHT2, GL_SPECULAR, light[2].specular);
glLightfv(GL_LIGHT2, GL_DIFFUSE, light[2].diffuse);
glMaterialfv(GL_FRONT, GL_AMBIENT, material_ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE, material_diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, material_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, material_shininess);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
resize(800, 600);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();
gluLookAt(
0.0f, 0.0f, 3.0f,
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f
);
glPushMatrix();
glRotatef(light[0].angle, 1.0f, 0.0f, 0.0f);
// Update light[0]'s position along the y-axis
light[0].position[1] = light[0].angle;
glLightfv(GL_LIGHT0, GL_POSITION, light[0].position);
glPopMatrix();
glPushMatrix();
glRotatef(light[1].angle, 0.0f, 1.0f, 0.0f);
// Update the light[1] by a rotation, like above
light[1].position[0] = light[1].angle;
glLightfv(GL_LIGHT1, GL_POSITION, light[1].position);
glPopMatrix();
glPopMatrix();
glPushMatrix();
glRotatef(light[2].angle, 0.0f, 0.0f, 1.0f);
light[2].position[0] = light[2].angle;
glLightfv(GL_LIGHT2, GL_POSITION, light[2].position);
glPopMatrix();
glPushMatrix();
glTranslatef(0.0f, 0.0f, -0.7f);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
quadric = gluNewQuadric();
gluSphere(quadric, 0.2f, 80, 80);
glPopMatrix();
light[0].angle += 0.1f;
light[1].angle += 0.1f;
light[2].angle += 0.1f;
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITORINFOF_PRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
A normal is a vector perpendicular to a surface; to light any object we need the normal of each of its faces (or of each vertex).
In the fixed-function pipeline examples here we supply one normal per face (in a cube, one normal shared by four vertices), while in a programmable pipeline you typically supply a normal per vertex. This is a practical difference between FFP and PP.
A vector is something that represents a direction and a length.
Whenever you give a normal, the length doesn't matter, but the direction does.
[Sketch: a light source above a surface, with the surface normal drawn perpendicular to it.] The normal tells us at what angle light from the source hits the surface, and therefore how much reflection it causes.
1) What does OpenGL do for us internally?
a) You give the position of the light, e.g. { 100, 100, 100, 1 }
The fourth component selects the interpretation: 1 means a position, 0 means a direction.
OpenGL converts this position vector and stores it.
b) You give the normal
OpenGL checks the angle between the light vector and the normal by applying the dot product internally.
The dot product and the cross product are the two ways of multiplying vectors.
The cross product is used to derive normals: it yields a vector perpendicular to both input vectors.
The dot product gives the effect of one vector on another (the length of one projected onto the other).
Dot product: a · b = |a||b| cos θ, where a is the surface normal, b is the direction to the light source, and θ is the angle between them.
This dot product determines the impact of the light on that particular vertex.
The bigger the angle, the smaller the impact of the light on the object; the smaller the angle, the bigger the impact.
Cross product magnitude: |a × b| = |a||b| sin θ
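The diffuse rule above can be sketched directly: with unit vectors the dot product is just cos θ, clamped at zero when the light is behind the surface (diffuse_factor is our own illustrative helper, not an OpenGL function):

```c
#include <assert.h>

/* Lambertian diffuse factor: dot(n, l) clamped to [0, 1], assuming the
 * surface normal n and the direction-to-light l are unit vectors.
 * A smaller angle between them means a brighter surface. */
float diffuse_factor(const float n[3], const float l[3])
{
    float d = n[0] * l[0] + n[1] * l[1] + n[2] * l[2];
    return d < 0.0f ? 0.0f : d;
}
```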
Now let's start implementing the code we need to texture a sphere.
We need to include math.h and a header holding the texture loading code, so add the following to the include section:
...
#define _USE_MATH_DEFINES 1
#include <math.h>
#include "texture.h"
...
Next we need a function prototype to generate a sphere, plus some function prototypes and variables from previous lessons:
...
void draw_sphere(float, int);
bool load_texture(GLuint*, TCHAR[]);
bool bLight = false;
GLuint texture;
struct Light {
GLfloat ambient[4];
GLfloat diffuse[4];
GLfloat specular[4];
GLfloat position[4];
GLfloat angle;
};
struct Light light[3] = {
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 1.0f, 0.0f, 0.0f, 1.0f },
{ 1.0f, 0.0f, 0.0f, 1.0f },
{ -2.0f, 0.0f, 0.0f, 1.0f }
},
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 0.0f, 1.0f, 0.0f, 1.0f },
{ 0.0f, 1.0f, 0.0f, 1.0f },
{ 0.0f, 0.0f, 0.0f, 1.0f }
},
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 0.0f, 0.0f, 1.0f, 1.0f },
{ 0.0f, 0.0f, 1.0f, 1.0f },
{ 0.0f, 0.0f, 0.0f, 1.0f }
}
};
GLfloat material_ambient[] = { 0.0f, 0.0f, 0.0f, 1.0f };
GLfloat material_diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat material_specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat material_shininess[] = { 50.0f }; // GL_SHININESS takes a single value
...
We also want to be able to switch the lighting on and off, so we make a user defined switch inside WndProc:
case 'l':
case 'L':
if (bLight == false) {
bLight = true;
glEnable(GL_LIGHTING);
}
else {
bLight = false;
glDisable(GL_LIGHTING);
}
break;
}
Then we enable lighting, textures etc. in initialize:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glShadeModel(GL_SMOOTH);
glClearDepth(1.0f);
glEnable(GL_LIGHT0);
glEnable(GL_LIGHT1);
glEnable(GL_LIGHT2);
glEnable(GL_TEXTURE_2D);
glLightfv(GL_LIGHT0, GL_AMBIENT, light[0].ambient);
glLightfv(GL_LIGHT0, GL_SPECULAR, light[0].specular);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light[0].diffuse);
glLightfv(GL_LIGHT1, GL_AMBIENT, light[1].ambient);
glLightfv(GL_LIGHT1, GL_SPECULAR, light[1].specular);
glLightfv(GL_LIGHT1, GL_DIFFUSE, light[1].diffuse);
glLightfv(GL_LIGHT2, GL_AMBIENT, light[2].ambient);
glLightfv(GL_LIGHT2, GL_SPECULAR, light[2].specular);
glLightfv(GL_LIGHT2, GL_DIFFUSE, light[2].diffuse);
glMaterialfv(GL_FRONT, GL_AMBIENT, material_ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE, material_diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, material_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, material_shininess);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
resize(800, 600);
load_texture(&texture, MAKEINTRESOURCE(IDBITMAP_TEXTURE));
Then we update the draw_sphere function and add load_texture.
Here is how the texturing formula in draw_sphere works (marked LESSON 38 in the function definition):
- draw_sphere draws a sphere with radius r centered at the origin of the coordinate system, built from n bands of n quads each.
- The code starts by flipping a negative radius (and a negative n) to positive.
- If n < 4 or r <= 0, a sphere cannot be formed, so the function just draws a single point at the origin and returns.
- The outer loop walks around the sphere in longitude: phi1 and phi2, computed as j and j + 1 times 2π / n, bound one vertical slice.
- The inner loop walks from pole to pole in latitude (theta from 0 to π) and emits pairs of vertices on the phi2 and phi1 edges, which GL_TRIANGLE_STRIP stitches into triangles.
- Each vertex position is (sin(theta) · cos(phi), sin(theta) · sin(phi), cos(theta)) scaled by r; since these values lie on the unit sphere before scaling, they double as the vertex normal.
- The texture coordinates map longitude and latitude into the unit square: s = phi / 2π wraps the image once around the sphere, and t = 1 − theta / π runs from 1 at the north pole down to 0 at the south pole.
void draw_sphere(float r, int n)
{
int i, j;
GLdouble phi1, phi2, theta, s, t;
GLfloat ex, ey, ez;
GLfloat px, py, pz;
if (r < 0) r = -r;
if (n < 0) n = -n;
// A sphere needs at least four subdivisions and a positive radius
if (n < 4 || r <= 0) {
// Degenerate input: just draw a single point at the origin
glBegin(GL_POINTS);
glVertex3f(0.0f, 0.0f, 0.0f);
glEnd();
return;
}
for (j = 0; j < n; j++) {
phi1 = j * M_PI * 2 / n;
phi2 = (j + 1) * M_PI * 2 / n;
// Emit pairs of points on the phi2 and phi1 edges and let GL_TRIANGLE_STRIP stitch them into triangles
glBegin(GL_TRIANGLE_STRIP);
for (i = 0; i <= n; i++) {
theta = i * M_PI / n;
ex = sin(theta) * cos(phi2);
ey = sin(theta) * sin(phi2);
ez = cos(theta);
px = r * ex;
py = r * ey;
pz = r * ez;
// LESSON 38
s = phi2 / (M_PI * 2);
t = 1 - (theta / M_PI);
glTexCoord2f(s, t);
glNormal3f(ex, ey, ez);
glVertex3f(px, py, pz);
ex = sin(theta) * cos(phi1);
ey = sin(theta) * sin(phi1);
ez = cos(theta);
px = r * ex;
py = r * ey;
pz = r * ez;
// LESSON 38
s = phi1 / (M_PI * 2);
t = 1 - (theta / M_PI);
glTexCoord2f(s, t);
glNormal3f(ex, ey, ez);
glVertex3f(px, py, pz);
}
glEnd();
}
}
bool load_texture(GLuint* texture, TCHAR imageResourceId[])
{
HBITMAP bitmap = NULL;
BITMAP bmp;
bool bStatus = false;
bitmap = (HBITMAP)LoadImage(GetModuleHandle(NULL), imageResourceId, IMAGE_BITMAP, 0, 0, LR_CREATEDIBSECTION);
if (bitmap != NULL) {
GetObject(bitmap, sizeof(BITMAP), &bmp);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
// Generate texture
glGenTextures(1, texture);
glBindTexture(GL_TEXTURE_2D, *texture);
// Texture filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// Texture wrapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); // GL_NEAREST is a filter, not a wrap mode
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, bmp.bmWidth, bmp.bmHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, bmp.bmBits);
DeleteObject(bitmap);
bStatus = true;
}
return bStatus;
}
Add the following into display to draw a sphere, with texture (see lesson 22 for details on how to add textures):
...
glTranslatef(0.0f, 0.0f, -3.0f);
static float rotation = 1.0f;
glRotatef(rotation, 0.0f, 1.0f, 1.0f);
rotation += 0.1f;
draw_sphere(1.0f, 60);
glPushMatrix();
gluLookAt(
0.0f, 0.0f, 3.0f,
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f
);
glPushMatrix();
glRotatef(light[0].angle, 1.0f, 0.0f, 0.0f);
// Update the position of light[0], starting with the y-direction
light[0].position[1] = light[0].angle;
glLightfv(GL_LIGHT0, GL_POSITION, light[0].position);
glPopMatrix();
glPushMatrix();
glRotatef(light[1].angle, 0.0f, 1.0f, 0.0f);
// Update the light[1] by a rotation, like above
light[1].position[0] = light[1].angle;
glLightfv(GL_LIGHT1, GL_POSITION, light[1].position);
glPopMatrix();
glPopMatrix();
glPushMatrix();
glRotatef(light[2].angle, 0.0f, 0.0f, 1.0f);
light[2].position[0] = light[2].angle;
glLightfv(GL_LIGHT2, GL_POSITION, light[2].position);
glPopMatrix();
glPushMatrix();
glTranslatef(0.0f, 0.0f, -0.7f);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glPopMatrix();
light[0].angle += 0.1f;
light[1].angle += 0.1f;
light[2].angle += 0.1f;
...
And just like that we now have a textured sphere with three light sources moving over it!
Entire code:
#include <windows.h>
#include <GL/gl.h>
#include <gl/glu.h>
#include <stdbool.h>
#define _USE_MATH_DEFINES 1
#include <math.h>
#include "texture.h"
#pragma comment(lib, "opengl32.lib")
#pragma comment(lib, "glu32.lib")
#pragma comment(linker, "/subsystem:windows" /*/entry:mainCRTStartup*/)
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
// LESSON 38
void draw_sphere(float, int);
bool load_texture(GLuint*, TCHAR[]);
bool bLight = false;
GLuint texture;
struct Light {
GLfloat ambient[4];
GLfloat diffuse[4];
GLfloat specular[4];
GLfloat position[4];
GLfloat angle;
};
struct Light light[3] = {
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 1.0f, 0.0f, 0.0f, 1.0f },
{ 1.0f, 0.0f, 0.0f, 1.0f },
{ -2.0f, 0.0f, 0.0f, 1.0f }
},
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 0.0f, 1.0f, 0.0f, 1.0f },
{ 0.0f, 1.0f, 0.0f, 1.0f },
{ 0.0f, 0.0f, 0.0f, 1.0f }
},
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 0.0f, 0.0f, 1.0f, 1.0f },
{ 0.0f, 0.0f, 1.0f, 1.0f },
{ 0.0f, 0.0f, 0.0f, 1.0f }
}
};
GLfloat material_ambient[] = { 0.0f, 0.0f, 0.0f, 1.0f };
GLfloat material_diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat material_specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat material_shininess[] = { 50.0f }; // GL_SHININESS takes a single value
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-SDK");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
case 'l':
case 'L':
if (bLight == false) {
bLight = true;
glEnable(GL_LIGHTING);
}
else {
bLight = false;
glDisable(GL_LIGHTING);
}
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glShadeModel(GL_SMOOTH);
glClearDepth(1.0f);
glEnable(GL_LIGHT0);
glEnable(GL_LIGHT1);
glEnable(GL_LIGHT2);
glEnable(GL_TEXTURE_2D);
glLightfv(GL_LIGHT0, GL_AMBIENT, light[0].ambient);
glLightfv(GL_LIGHT0, GL_SPECULAR, light[0].specular);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light[0].diffuse);
glLightfv(GL_LIGHT1, GL_AMBIENT, light[1].ambient);
glLightfv(GL_LIGHT1, GL_SPECULAR, light[1].specular);
glLightfv(GL_LIGHT1, GL_DIFFUSE, light[1].diffuse);
glLightfv(GL_LIGHT2, GL_AMBIENT, light[2].ambient);
glLightfv(GL_LIGHT2, GL_SPECULAR, light[2].specular);
glLightfv(GL_LIGHT2, GL_DIFFUSE, light[2].diffuse);
glMaterialfv(GL_FRONT, GL_AMBIENT, material_ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE, material_diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, material_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, material_shininess);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
resize(800, 600);
load_texture(&texture, MAKEINTRESOURCE(IDBITMAP_TEXTURE));
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -3.0f);
static float rotation = 1.0f;
glRotatef(rotation, 0.0f, 1.0f, 1.0f);
rotation += 0.1f;
draw_sphere(1.0f, 60);
glPushMatrix();
gluLookAt(
0.0f, 0.0f, 3.0f,
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f
);
glPushMatrix();
glRotatef(light[0].angle, 1.0f, 0.0f, 0.0f);
// Update the position of light[0], starting with the y-direction
light[0].position[1] = light[0].angle;
glLightfv(GL_LIGHT0, GL_POSITION, light[0].position);
glPopMatrix();
glPushMatrix();
glRotatef(light[1].angle, 0.0f, 1.0f, 0.0f);
// Update the light[1] by a rotation, like above
light[1].position[0] = light[1].angle;
glLightfv(GL_LIGHT1, GL_POSITION, light[1].position);
glPopMatrix();
glPopMatrix();
glPushMatrix();
glRotatef(light[2].angle, 0.0f, 0.0f, 1.0f);
light[2].position[0] = light[2].angle;
glLightfv(GL_LIGHT2, GL_POSITION, light[2].position);
glPopMatrix();
glPushMatrix();
glTranslatef(0.0f, 0.0f, -0.7f);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glPopMatrix();
light[0].angle += 0.1f;
light[1].angle += 0.1f;
light[2].angle += 0.1f;
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
void draw_sphere(float r, int n)
{
int i, j;
GLdouble phi1, phi2, theta, s, t;
GLfloat ex, ey, ez;
GLfloat px, py, pz;
if (r < 0) r = -r;
if (n < 0) n = -n;
// A sphere needs at least four subdivisions and a positive radius
if (n < 4 || r <= 0) {
// Degenerate input: just draw a single point at the origin
glBegin(GL_POINTS);
glVertex3f(0.0f, 0.0f, 0.0f);
glEnd();
return;
}
for (j = 0; j < n; j++) {
phi1 = j * M_PI * 2 / n;
phi2 = (j + 1) * M_PI * 2 / n;
// Emit pairs of points on the phi2 and phi1 edges and let GL_TRIANGLE_STRIP stitch them into triangles
glBegin(GL_TRIANGLE_STRIP);
for (i = 0; i <= n; i++) {
theta = i * M_PI / n;
ex = sin(theta) * cos(phi2);
ey = sin(theta) * sin(phi2);
ez = cos(theta);
px = r * ex;
py = r * ey;
pz = r * ez;
// LESSON 38
s = phi2 / (M_PI * 2);
t = 1 - (theta / M_PI);
glTexCoord2f(s, t);
glNormal3f(ex, ey, ez);
glVertex3f(px, py, pz);
ex = sin(theta) * cos(phi1);
ey = sin(theta) * sin(phi1);
ez = cos(theta);
px = r * ex;
py = r * ey;
pz = r * ez;
// LESSON 38
s = phi1 / (M_PI * 2);
t = 1 - (theta / M_PI);
glTexCoord2f(s, t);
glNormal3f(ex, ey, ez);
glVertex3f(px, py, pz);
}
glEnd();
}
}
bool load_texture(GLuint* texture, TCHAR imageResourceId[])
{
HBITMAP bitmap = NULL;
BITMAP bmp;
bool bStatus = false;
bitmap = (HBITMAP)LoadImage(GetModuleHandle(NULL), imageResourceId, IMAGE_BITMAP, 0, 0, LR_CREATEDIBSECTION);
if (bitmap != NULL) {
GetObject(bitmap, sizeof(BITMAP), &bmp);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
// Generate texture
glGenTextures(1, texture);
glBindTexture(GL_TEXTURE_2D, *texture);
// Texture filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// Texture wrapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); // GL_NEAREST is a filter, not a wrap mode
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, bmp.bmWidth, bmp.bmHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, bmp.bmBits);
DeleteObject(bitmap);
bStatus = true;
}
return bStatus;
}
This lesson will focus on adding fog to give your scene some ambience. Let's dive into it!
Begin by adding some global variables to use to set up the fog:
...
bool g_pressed; // Switch variable to enable or disable fog
GLuint fog_mode[] = { // Storage variable for three types of fog
GL_EXP,
GL_EXP2,
GL_LINEAR
};
GLuint fog_filter = 0; // Which filter to use
GLfloat fog_color[4] = { // Color of the fog
0.5f, 0.5f, 0.5f, 1.0f
};
...
Now enable the fog inside your display:
glFogi(GL_FOG_MODE, fog_mode[fog_filter]); // Fog mode - EXP, EXP2 or LINEAR
glFogfv(GL_FOG_COLOR, fog_color); // Fog color
glFogf(GL_FOG_DENSITY, 0.25f); // Fog density
glHint(GL_FOG_HINT, GL_DONT_CARE); // Fog rendering hint (how precisely the fog should be computed)
glFogf(GL_FOG_START, 1.0f); // Eye-space depth at which the fog starts (GL_LINEAR mode)
glFogf(GL_FOG_END, 15.0f); // Eye-space depth at which the fog reaches full density
Lastly we add the switch to turn fog on or off by placing the following in your WndProc:
...
case 'g':
case 'G':
if (g_pressed == false) {
g_pressed = true;
fog_filter += 1; // Increment the fog_filter
if (fog_filter > 2) {
fog_filter = 0; // Resetting the fog_filter
}
glEnable(GL_FOG); // Enable fog in OpenGL's internal state machine
glFogi(GL_FOG_MODE, fog_mode[fog_filter]); // Apply the newly selected fog mode
}
break;
...
Compile and run the program, then press g to enable the fog!
Complete code example:
#include <windows.h>
#include <GL/gl.h>
#include <gl/glu.h>
#include <stdbool.h>
#define _USE_MATH_DEFINES 1
#include <math.h>
// #include "texture.h"
#pragma comment(lib, "opengl32.lib")
#pragma comment(lib, "glu32.lib")
#pragma comment(linker, "/subsystem:windows" /*/entry:mainCRTStartup*/)
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
// LESSON 40
bool g_pressed; // Switch to enable / disable fog
GLuint fog_mode[] = { // Storage for three type of fog
GL_EXP,
GL_EXP2,
GL_LINEAR
};
GLuint fog_filter = 0; // Which filter to use
GLfloat fog_color[4] = { // Color of the fog
0.5f, 0.5f, 0.5f, 1.0f
};
// LESSON 38
void draw_sphere(float, int);
bool load_texture(GLuint*, TCHAR[]);
bool bLight = false;
GLuint texture;
struct Light {
GLfloat ambient[4];
GLfloat diffuse[4];
GLfloat specular[4];
GLfloat position[4];
GLfloat angle;
};
struct Light light[3] = {
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 1.0f, 0.0f, 0.0f, 1.0f },
{ 1.0f, 0.0f, 0.0f, 1.0f },
{ -2.0f, 0.0f, 0.0f, 1.0f }
},
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 0.0f, 1.0f, 0.0f, 1.0f },
{ 0.0f, 1.0f, 0.0f, 1.0f },
{ 0.0f, 0.0f, 0.0f, 1.0f }
},
{
{ 0.0f, 0.0f, 0.0f, 1.0f },
{ 0.0f, 0.0f, 1.0f, 1.0f },
{ 0.0f, 0.0f, 1.0f, 1.0f },
{ 0.0f, 0.0f, 0.0f, 1.0f }
}
};
GLfloat material_ambient[] = { 0.0f, 0.0f, 0.0f, 1.0f };
GLfloat material_diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat material_specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat material_shininess[] = { 50.0f }; // GL_SHININESS takes a single value
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-SDK");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
case 'l':
case 'L':
if (bLight == false) {
bLight = true;
glEnable(GL_LIGHTING);
}
else {
bLight = false;
glDisable(GL_LIGHTING);
}
break;
case 'g':
case 'G':
if (g_pressed == false) {
g_pressed = true;
fog_filter += 1; // Increment the fog_filter
if (fog_filter > 2) {
fog_filter = 0; // Resetting the fog_filter
}
glEnable(GL_FOG); // Enable fog in OpenGL's internal state machine
glFogi(GL_FOG_MODE, fog_mode[fog_filter]); // Apply the newly selected fog mode
}
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glShadeModel(GL_SMOOTH);
glClearDepth(1.0f);
glEnable(GL_LIGHT0);
glEnable(GL_LIGHT1);
glEnable(GL_LIGHT2);
glEnable(GL_TEXTURE_2D);
glLightfv(GL_LIGHT0, GL_AMBIENT, light[0].ambient);
glLightfv(GL_LIGHT0, GL_SPECULAR, light[0].specular);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light[0].diffuse);
glLightfv(GL_LIGHT1, GL_AMBIENT, light[1].ambient);
glLightfv(GL_LIGHT1, GL_SPECULAR, light[1].specular);
glLightfv(GL_LIGHT1, GL_DIFFUSE, light[1].diffuse);
glLightfv(GL_LIGHT2, GL_AMBIENT, light[2].ambient);
glLightfv(GL_LIGHT2, GL_SPECULAR, light[2].specular);
glLightfv(GL_LIGHT2, GL_DIFFUSE, light[2].diffuse);
glMaterialfv(GL_FRONT, GL_AMBIENT, material_ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE, material_diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, material_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, material_shininess);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
resize(800, 600);
// load_texture(&texture, MAKEINTRESOURCE(IDBITMAP_TEXTURE));
// LESSON 40
//glFogi(GL_FOG_MODE, fog_mode[fog_filter]);
//glFogfv(GL_FOG_COLOR, fog_color);
//glFogf(GL_FOG_DENSITY, 2.25f);
//glHint(GL_FOG_HINT, GL_DONT_CARE);
//glFogf(GL_FOG_START, 1.0f);
//glFogf(GL_FOG_END, 15.0f);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// LESSON 40
glFogi(GL_FOG_MODE, fog_mode[fog_filter]); // Fog mode - EXP, EXP2 or LINEAR
glFogfv(GL_FOG_COLOR, fog_color); // Fog color
glFogf(GL_FOG_DENSITY, 0.25f); // Fog density
glHint(GL_FOG_HINT, GL_DONT_CARE); // Fog rendering hint (how precisely the fog should be computed)
glFogf(GL_FOG_START, 1.0f); // Eye-space depth at which the fog starts (GL_LINEAR mode)
glFogf(GL_FOG_END, 15.0f); // Eye-space depth at which the fog reaches full density
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -3.0f);
static float rotation = 1.0f;
glRotatef(rotation, 0.0f, 1.0f, 1.0f);
rotation += 0.1f;
draw_sphere(1.0f, 60);
glPushMatrix();
gluLookAt(
0.0f, 0.0f, 3.0f,
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f
);
glPushMatrix();
glRotatef(light[0].angle, 1.0f, 0.0f, 0.0f);
// Update the position of light[0], starting with the y-direction
light[0].position[1] = light[0].angle;
glLightfv(GL_LIGHT0, GL_POSITION, light[0].position);
glPopMatrix();
glPushMatrix();
glRotatef(light[1].angle, 0.0f, 1.0f, 0.0f);
// Update the light[1] by a rotation, like above
light[1].position[0] = light[1].angle;
glLightfv(GL_LIGHT1, GL_POSITION, light[1].position);
glPopMatrix();
glPopMatrix();
glPushMatrix();
glRotatef(light[2].angle, 0.0f, 0.0f, 1.0f);
light[2].position[0] = light[2].angle;
glLightfv(GL_LIGHT2, GL_POSITION, light[2].position);
glPopMatrix();
glPushMatrix();
glTranslatef(0.0f, 0.0f, -0.7f);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glPopMatrix();
light[0].angle += 0.1f;
light[1].angle += 0.1f;
light[2].angle += 0.1f;
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
void draw_sphere(float r, int n)
{
int i, j;
GLdouble phi1, phi2, theta, s, t;
GLfloat ex, ey, ez;
GLfloat px, py, pz;
if (r < 0) r = -r;
if (n < 0) n = -n;
// A sphere needs at least four subdivisions and a positive radius
if (n < 4 || r <= 0) {
// Degenerate input: just draw a single point at the origin
glBegin(GL_POINTS);
glVertex3f(0.0f, 0.0f, 0.0f);
glEnd();
return;
}
for (j = 0; j < n; j++) {
phi1 = j * M_PI * 2 / n;
phi2 = (j + 1) * M_PI * 2 / n;
// Emit pairs of points on the phi2 and phi1 edges and let GL_TRIANGLE_STRIP stitch them into triangles
glBegin(GL_TRIANGLE_STRIP);
for (i = 0; i <= n; i++) {
theta = i * M_PI / n;
ex = sin(theta) * cos(phi2);
ey = sin(theta) * sin(phi2);
ez = cos(theta);
px = r * ex;
py = r * ey;
pz = r * ez;
// LESSON 38
s = phi2 / (M_PI * 2);
t = 1 - (theta / M_PI);
glTexCoord2f(s, t);
glNormal3f(ex, ey, ez);
glVertex3f(px, py, pz);
ex = sin(theta) * cos(phi1);
ey = sin(theta) * sin(phi1);
ez = cos(theta);
px = r * ex;
py = r * ey;
pz = r * ez;
// LESSON 38
s = phi1 / (M_PI * 2);
t = 1 - (theta / M_PI);
glTexCoord2f(s, t);
glNormal3f(ex, ey, ez);
glVertex3f(px, py, pz);
}
glEnd();
}
}
bool load_texture(GLuint* texture, TCHAR imageResourceId[])
{
HBITMAP bitmap = NULL;
BITMAP bmp;
bool bStatus = false;
bitmap = (HBITMAP)LoadImage(GetModuleHandle(NULL), imageResourceId, IMAGE_BITMAP, 0, 0, LR_CREATEDIBSECTION);
if (bitmap != NULL) {
GetObject(bitmap, sizeof(BITMAP), &bmp);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
// Generate texture
glGenTextures(1, texture);
glBindTexture(GL_TEXTURE_2D, *texture);
// Texture filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// Texture wrapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); // GL_NEAREST is a filter, not a wrap mode
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, bmp.bmWidth, bmp.bmHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, bmp.bmBits);
DeleteObject(bitmap);
bStatus = true;
}
return bStatus;
}
To tile a seamless texture repeatedly across a surface, OpenGL uses texture coordinates greater than 1.0 together with the GL_REPEAT wrap mode, as demonstrated in the example code beneath:
glBegin(GL_QUADS);
// Texture tiling
glTexCoord2f(50.0f, 50.0f);
glVertex2f(1.0f, 1.0f);
glTexCoord2f(0.0f, 50.0f);
glVertex2f(-1.0f, 1.0f);
glTexCoord2f(0.0f, 0.0f);
glVertex2f(-1.0f, -1.0f);
glTexCoord2f(50.0f, 0.0f);
glVertex2f(1.0f, -1.0f);
glEnd();
A good practice when issuing draw calls in OpenGL is to avoid wrapping them in loops such as for() or while() where possible.
CPU - RAM - HDD - GPU
The program and its instructions are loaded from the HDD into RAM. The CPU fetches and executes those instructions; for rendering, it calls the device driver for your GPU, which renders into VRAM (attached to the framebuffer).
There are two types of pipelines: the Fixed Function Pipeline (FFP) and the Programmable Pipeline (PP), which extends the FFP with shaders.
FFP: you have no control over the data; you just pass it to the pipeline, which has predefined stages.
1. Pre-vertex Stage - You pass the vertices to the pipeline
2. Vertex Stage
a. Transformation
a) Position (along the x, y and z-axis)
b) Rotation
c) Scaling
Our vertices are in the local space.
Everything in computer graphics is a matrix.
a) Local Space coordinates are multiplied by the Transformation Matrix, resulting in the Model Matrix.
b) Camera - the Model Matrix gets multiplied by the camera (lookAt) matrix, resulting in the ModelView Matrix.
c) The ModelView Matrix gets multiplied by the Projection Matrix, resulting in Clip Space.
3. Post Vertex Stage
a) Primitive Assembly
b) Viewport clipping / mapping
c) Perspective divide → culling
4) Rasterizer Stage
5) Fragment Stage
6) Per Framebuffer tests
a) Pixel Ownership
b) Scissor tests
c) Blend tests
d) Depth tests
e) Logic op
f) Stencil tests
g) Dithering tests
h) Alpha tests
The pipeline has an order. There are two types of rendering: realtime rendering and offline rendering.
Realtime rendering happens in real time and is driven by code; offline rendering is pre-rendered, stored in e.g. a video format, and played back by a video player.
Vertices, Normals, Texture coordinates → added into the pipeline.
A vertex is just a point in space...
1. Vertex specification: here you give the vertices as input.
The vertex is now inside the pipeline.
a) Transformation (the order you list them in doesn't matter here)
1: Translate 2: Rotate 3: Scale
There are two projection types - orthographic (a bounding box) and perspective (a V-shaped view frustum) - perspective is a (fake) volume in "3D" projected onto a 2D screen.
Orthographic: ()
Matrix: the standard way to store the 3D transformations. It's basically a 2D array of numbers.
The vertex gets multiplied with the translate, rotate and scale matrices; here the order does matter.
The result (the vertex multiplied by the translate, rotate and scale matrices) is the world matrix.
Now you define the view (if you don't, the camera defaults to {0, 0, 0}).
The world matrix gets multiplied with the camera matrix, resulting in the World View Matrix.
The World View Matrix gets multiplied with the Projection Matrix, resulting in the Clip Matrix.
A transformation is the journey of a vertex from its local space to clip space.
Local Space → World Space → World View Space → Clip Space
All this is the work of the VERTEX SHADER.
The difference between legacy and modern OpenGL is the introduction of shaders and the programmable pipeline.
The rendering flow is the transformation of vertices (required), texture coordinates and normals - the attributes, i.e. the basic raw data passed to the pipeline.
- You are passing attributes to the Pipeline
Attributes - Vertices, Normals, Color, Texture Coordinates
RULES: There are two compulsory shaders - Vertex and Fragment.
a) All the data is passed to the Vertex Shader.
A shader is a program that runs on the GPU (which is why we render on the GPU rather than the CPU).
Every shader has a specific task or purpose.
The vertex shader runs PER vertex:
if there are three vertices to render a triangle, it runs three times.
b) Using that data, the vertex shader performs the Transformation.
There are three types of transformation - Translate, Rotate and Scale
These Translate, Rotate and Scale operations are matrices.
Why matrices? They are the container used to represent 3D transformation data.
The vertices you pass to the vertex shader get multiplied with the Transformation Matrix,
resulting in Model Space:
vPosition * translateMatrix = ModelSpace;
CameraMatrix * ModelSpace = Model-View Matrix;
(Define Orthographic or Projection )
ProjectionMatrix * Model-View Matrix = Clip Space;
Clip Space is the return type or value of the VERTEX SHADER.
CPU (5 cores)
+-----------+
|+---------+|
|| ++ ++ ||
|| ++ ++ ||
|+---------+|
+-----------+
GPU (1920 cores)
+-----------+
|||||||||||||
|||||||||||||
|||||||||||||
|||||||||||||
+-----------+
By using the 'in' keyword
Example - in vec3 vPosition;
a) Primitive Assembly - What geometry do you wish to render? E.g. Triangle, Line, Point, LineLoop, LineStrip
b) Viewport Clipping - See illustration below (Viewport clipping)
c) Perspective Divide - Converts homogeneous coordinates to Cartesian coordinates (bound to the x, y, z axes)
d) Face Culling - Back-face culling (enable or disable to avoid rendering unnecessary faces)
[2,3]
x = 2 and y = 3 (Cartesian)
[2,3,w] where w can be either 0 or 1
Conversion of homogeneous back to Cartesian:
x/w and y/w
2/1 and 3/1 = [2,3]
What if w = 0?
2/0 and 3/0 = infinity (the point is at infinity, i.e. a direction)
All this is frustum calculation...
+---------------+
| |
| |
| /\ |
+-----/..\------|
/____\
Viewport clipping
Create potential pixels... this stage isn't programmable!
Shaders are written in GLSL, a C-based programming language used to process vertices and pass them through the pipeline.
The viewing matrix is determined by the projection. Two types - orthographic and perspective.
The vertex shader returns the clip-space position (via the built-in variable gl_Position)...
Post Vertex Stage: primitive assembly - viewport clipping (discarding geometry outside of the viewport).
Perspective divide (homogeneous coordinate (the added w) → Cartesian coordinate).
Face Culling (hides or removes "hidden" faces). It's an optimisation in the renderer.
Rasterizer: creates potential pixels (fragments). The "brain" or logic where all primitives are turned into fragments.
Fragment Shader Stage: every fragment produced by the rasterizer is coloured by this stage.
Vertex Shader: runs once per vertex; Fragment Shader: runs once per fragment.
The main job of the Fragment Shader is to colour each fragment.
Two optional shaders: the Geometry shader and the Tessellation shader.
Geometry Shader: creates multiple primitives out of a single primitive.
Tessellation Shader: adds or increases detail on an object (subdividing/stitching...).
PER FRAGMENT STAGE (PER SAMPLE PROCESSING)
1) SCISSOR TEST
+---------------------------+
| +--+ . |
| +--+ /\ |
+---------------------------+
Inside a viewport you can cull out an object if needed.
2) DEPTH TEST (happens inside the pipeline)
Testing the z-values on fragments
3) PIXEL OWNERSHIP TEST
If a pixel is hidden by a window, it fails the Pixel Ownership Test at this stage.
4) BLENDING TEST
Alpha blending is handled in the pipeline (if the front object is transparent, objects behind it should be visible).
5) LOGICAL OPERATIONS
All the bitwise operations are handled by the pipeline
6) DITHERING TEST (WRITE MASK)
An image-processing algorithm that intentionally applies noise (handled internally by OpenGL).
7) STENCIL TEST
We'll get back to this stage (advanced ...)
If a fragment passes all these stages it is added to the framebuffer. The content of the framebuffer is displayed on the screen.
To initialize a modern OpenGL context you need glew.h (the OpenGL Extension Wrangler Library). GLEW is open source and cross-platform; OpenGL itself is maintained by the Khronos Group.
First we have to download the library (https://glew.sourceforge.net/)
Add the include and lib folder in setup (TODO) and copy the DLL into the folder... #include
There are two rendering approaches. The first is rasterization: you pass a set of vertices, which are processed by the rasterizer and put into the framebuffer.
Rasterization is the journey from fragment to final pixel! You need vertices to render:
you define some vertices, which are rasterized and rendered to the screen.
The second approach OpenGL supports is global illumination (volumetric rendering). Here you don't have any vertices (ray tracing, ray marching, path tracing) - it renders in a different way (no vertices), but can render many kinds of scenes.
The modern pipeline supports both approaches (the rasterization pipeline comes first).
#include <windows.h>
// LESSON 46
#include <GL/glew.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdbool.h>
#pragma comment(lib, "opengl32.lib")
// LESSON 46
#pragma comment(lib, "glew32.lib")
#pragma comment(linker, "/subsystem:windows" /*/entry:mainCRTStartup*/)
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-SDK");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 46
GLenum result = glewInit();
if (result != GLEW_OK) {
return -5;
}
SetWindowTextA(g_hwnd, (const char *)glGetString(GL_VERSION)); // glGetString returns const GLubyte*, so cast for SetWindowTextA
resize(800, 600);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
https://github.com/alberto-paparella/SimpleFirstPersonGame/tree/main/src
In all previous OpenGL lessons we used the legacy pipeline. To get the modern extensions that come with the modern pipeline, we use GLEW to load the extension (EXT) functions for a modern context.
We don't have to change the rendering context, because we use the OS calls; all we need are the modern functions (new extensions) that come with OpenGL.
Khronos maintains OpenGL for the desktop; for Android it's called OpenGL ES (Embedded Systems). The latest version is OpenGL ES 3.2.
In the browser it's called WebGL, which is OpenGL ES internally. WebGL2 is locked to OpenGL ES 3.0.
When OpenGL ES 3.2 was released, it introduced the tessellation and geometry shaders.
WebGL only supports vertex and fragment shaders, not tessellation and geometry shaders.
OpenGL comes in three flavours - OGL, OGL ES and WebGL - for desktop, mobile and browser respectively.
OpenGL was supported on the Mac until macOS 10.14 (2018), when Apple deprecated it in favour of their own technology, Metal; OpenGL is still available there, but only up to version 4.1 (with a deprecation warning).
All of these OpenGL variants use the shading language GLSL (OpenGL Shading Language).
There are two ways to store shaders: in files outside the program, or as strings inside the program. The industry standard is external files, while for home use, inline strings work just fine.
A shader is a (small C-like) program that runs on the GPU. Every shader has its own specific task. The vertex shader runs per vertex: if you send in 10k vertices, the vertex shader runs 10k times.
What we see on the screen comes from the rasterization pipeline (turning meshes into pixels on screen).
You pass data to the shader which processes it and passes it into the next stage of the pipeline.
You pass the data from one shader to the next using the in keyword.
Every shader also has its own return type; the type depends on what the shader does.
Every shader has its own main(), similar to the application's main function.
You pass some data to the shader; it processes it and passes it back into the pipeline.
// important in any shader (the #version directive has to match between the shaders)
#version 460 core
in vec3 position;
void main()
{
// some computation related to position
gl_Position = vec4(position, 1.0);
}
Note: no return keyword - gl_Position handles that for us.
#version 460 core
in vec3 color;
out vec3 out_color;
void main()
{
// color = vec(1.0f, 1.0f, 1.0f);
out_color = color;
}
The fragment shader used to have a built-in keyword for the fragment colour (gl_FragColor), but it's deprecated; you declare your own out variable instead.
These shaders are compiled at runtime, i.e. when the application is loaded into RAM. We'll write our own shader debugger to handle this (shaders are difficult to debug).
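Since shaders are compiled at runtime, the usual first step of such a shader debugger is to query the compile status and the info log right after glCompileShader. A sketch of that check (my own helper, not the lesson's code; it needs a live OpenGL context plus GLEW, `<stdio.h>` and `<stdlib.h>`, so it won't run standalone):

```c
/* Call right after glCompileShader(shader_obj); returns 0 on success.
   Requires an active OpenGL context and an initialised GLEW. */
static int check_shader_compile(GLuint shader_obj, const char *label)
{
    GLint status = GL_FALSE;
    glGetShaderiv(shader_obj, GL_COMPILE_STATUS, &status);
    if (status == GL_TRUE)
        return 0;

    GLint log_len = 0;
    glGetShaderiv(shader_obj, GL_INFO_LOG_LENGTH, &log_len);
    if (log_len > 1) {
        char *log = (char *)malloc((size_t)log_len);
        if (log) {
            glGetShaderInfoLog(shader_obj, log_len, NULL, log);
            fprintf(stderr, "%s shader compile error:\n%s\n", label, log);
            free(log);
        }
    }
    return -1;
}
```

Used as e.g. `check_shader_compile(vertex_shader_obj, "vertex")` after each glCompileShader call; the same pattern with glGetProgramiv/glGetProgramInfoLog covers glLinkProgram.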
There are two types of content in a shader: attributes and uniforms. Attributes are fixed once they are passed to the shader. Uniforms can be updated dynamically without any rebuild.
Both have specific use cases: attributes can be vertices, normals, texture coordinates. These are only sent once to the shader.
Uniforms can be any data; e.g. if you want to change your colour, you pass a variable as a uniform. These can be sent every frame.
A shader is a program that runs on the GPU; every shader has its work specified. GLSL, the shading language, is a C-like language with its own keywords.
Inside the shader you have the attributes (vertices, normals, texture coordinates) and the uniforms (anything).
Set up a modern OpenGL program by adding a shader program object in the global scope and adding the following to your initialize() function:
...
GLuint shader_program_obj;
...
bool initialize()
{
...
// LESSON 48 (You can write multiple vs and fs shaders)
GLuint vertex_shader_obj = glCreateShader(GL_VERTEX_SHADER); // Give the pointer to the vertex shader obj (this will create the shader)
const GLchar* vertex_shader = "#version 450" \
"\n" \
"void main()" \
"{" \
"" \
"}";
glShaderSource(vertex_shader_obj, 1, (const GLchar**)&vertex_shader, NULL); // Loads the source string into the shader object (2nd param is the number of source strings; 4th is an array of string lengths, NULL = null-terminated strings)
glCompileShader(vertex_shader_obj);
// Setting up fragment shader
GLuint fragment_shader_obj = glCreateShader(GL_FRAGMENT_SHADER);
const GLchar* fragment_shader = "#version 450" \
"\n" \
"void main()" \
"{" \
"" \
"}";
glShaderSource(fragment_shader_obj, 1, (const GLchar**)&fragment_shader, NULL);
glCompileShader(fragment_shader_obj);
shader_program_obj = glCreateProgram();
glAttachShader(shader_program_obj, vertex_shader_obj);
glAttachShader(shader_program_obj, fragment_shader_obj);
glLinkProgram(shader_program_obj);
...
}
#include <windows.h>
// LESSON 46
#include <GL/glew.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdbool.h>
#pragma comment(lib, "opengl32.lib")
// LESSON 46
#pragma comment(lib, "glew32.lib")
#pragma comment(linker, "/subsystem:windows" /*/entry:mainCRTStartup*/)
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
// LESSON 48
GLuint shader_program_obj;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-SDK");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 46
GLenum result = glewInit();
if (result != GLEW_OK) {
return -5;
}
SetWindowTextA(g_hwnd, (const char *)glGetString(GL_VERSION)); // glGetString returns const GLubyte*, so cast for SetWindowTextA
// LESSON 48 (You can write multiple vs and fs shaders)
GLuint vertex_shader_obj = glCreateShader(GL_VERTEX_SHADER); // Give the pointer to the vertex shader obj (this will create the shader)
const GLchar* vertex_shader = "#version 450" \
"\n" \
"void main()" \
"{" \
"" \
"}";
glShaderSource(vertex_shader_obj, 1, (const GLchar**)&vertex_shader, NULL); // Loads the source string into the shader object (2nd param is the number of source strings; 4th is an array of string lengths, NULL = null-terminated strings)
glCompileShader(vertex_shader_obj);
// Setting up fragment shader
GLuint fragment_shader_obj = glCreateShader(GL_FRAGMENT_SHADER);
const GLchar* fragment_shader = "#version 450" \
"\n" \
"void main()" \
"{" \
"" \
"}";
glShaderSource(fragment_shader_obj, 1, (const GLchar**)&fragment_shader, NULL);
glCompileShader(fragment_shader_obj);
shader_program_obj = glCreateProgram();
glAttachShader(shader_program_obj, vertex_shader_obj);
glAttachShader(shader_program_obj, fragment_shader_obj);
glLinkProgram(shader_program_obj);
resize(800, 600);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
Include vmath.h and change main.c to main.cpp (vmath is a C++ header).
#include <windows.h>
// LESSON 46
#include <GL/glew.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdbool.h>
#include "vmath.h"
#pragma comment(lib, "opengl32.lib")
#pragma comment(lib, "openal32.lib")
// LESSON 46
#pragma comment(lib, "glew32.lib")
#pragma comment(linker, "/subsystem:windows" /*/entry:mainCRTStartup*/)
// using namespace vmath;
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
// LESSON 48
GLuint shader_program_obj;
// LESSON 49
enum {
POSITION = 0,
};
GLuint vao_triangle;
GLuint vbo_position_triangle;
GLuint mvp_uniform;
vmath::mat4 perspective_projection_matrix;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-OpenGL-App");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 46
GLenum result = glewInit();
if (result != GLEW_OK) {
return -5;
}
// SetWindowTextA(g_hwnd, glGetString(GL_VERSION));
// LESSON 48 (You can write multiple vs and fs shaders)
// LESSON 49 ()
GLuint vertex_shader_obj = glCreateShader(GL_VERTEX_SHADER); // Give the pointer to the vertex shader obj (this will create the shader)
const GLchar* vertex_shader = "#version 450 core" \
"\n" \
"in vec4 vpos;" \
"uniform mat4 mvp_matrix;" \
"void main()" \
"{" \
" gl_Position = mvp_matrix * vpos;" \
"}";
glShaderSource(vertex_shader_obj, 1, (const GLchar**)&vertex_shader, NULL); // Loads the source string into the shader object (2nd param is the number of source strings; 4th is an array of string lengths, NULL = null-terminated strings)
glCompileShader(vertex_shader_obj);
// Setting up fragment shader
GLuint fragment_shader_obj = glCreateShader(GL_FRAGMENT_SHADER);
// LESSON 49 ('core' tells OpenGL to use the core profile of the shading language rather than the legacy/compatibility profile)
// Emit a blue colour for whatever fragments the vertex stage has passed on
const GLchar* fragment_shader = "#version 450 core" \
"\n" \
"out vec4 fragColor;" \
"void main()" \
"{" \
" fragColor = vec4(0.0, 0.0, 1.0, 1.0);" \
"}";
glShaderSource(fragment_shader_obj, 1, (const GLchar**)&fragment_shader, NULL);
glCompileShader(fragment_shader_obj);
shader_program_obj = glCreateProgram();
glAttachShader(shader_program_obj, vertex_shader_obj);
glAttachShader(shader_program_obj, fragment_shader_obj);
// LESSON 49
glBindAttribLocation(shader_program_obj, POSITION, "vpos");
glLinkProgram(shader_program_obj);
// LESSON 49
mvp_uniform = glGetUniformLocation(shader_program_obj, "mvp_matrix");
// LESSON 49
const GLfloat triangleVertices[] =
{
// Perspective triangle (Front face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f, // Right bottom
// Perspective triangle (Right face)
0.0f, 1.0f, 0.0f, // Apex
1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, -1.0f, // Right bottom
// Perspective triangle (Back face)
0.0f, 1.0f, 0.0f, // Apex
1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, -1.0f, // Right bottom
// Perspective triangle (Left face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, 1.0f
};
glGenVertexArrays(1, &vao_triangle);
glBindVertexArray(vao_triangle);
glGenBuffers(1, &vbo_position_triangle);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_triangle);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(POSITION);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
resize(800, 600);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
// LESSON 49
perspective_projection_matrix = vmath::perspective(45.0f, (GLfloat)w / (GLfloat)h, 1.0f, 100.0f );
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
// LESSON 49
glUseProgram(shader_program_obj);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
GFX pipeline (Vertices, Normals, Texture coords) → VS → FS
There are two kinds of data you can send. The first is attributes (data that can't be changed once passed, e.g. draw data; raw data = attributes). You pass these when the pipeline is initialized.
The second is uniforms (passed at runtime, e.g. the colour of the geometry used in a shader).
VERTEX SHADER
in vec4 vPos;
in vec2 vTex;
in vec3 vNormal;
void main()
{
// write the logic to convert raw data accordingly
}
You are creating a binding (a pipe) between the CPU and the GPU by sharing the memory address (pointer). (Add the GL calls you need to set things up.)
#include <windows.h>
// LESSON 46
#include <GL/glew.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdbool.h>
#include "vmath.h"
#pragma comment(lib, "opengl32.lib")
// LESSON 46
#pragma comment(lib, "glew32.lib")
#pragma comment(linker, "/subsystem:windows" /*/entry:mainCRTStartup*/)
// using namespace vmath;
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
// LESSON 48
GLuint shader_program_obj;
// LESSON 49
enum {
POSITION = 0,
};
GLuint vao_triangle;
GLuint vbo_position_triangle;
GLuint mvp_uniform;
vmath::mat4 perspective_projection_matrix;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-OpenGL-App");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
// LESSON 46
GLenum result = glewInit();
if (result != GLEW_OK) {
return -5;
}
// SetWindowTextA(g_hwnd, glGetString(GL_VERSION));
// LESSON 48 (You can write multiple vs and fs shaders)
// LESSON 49 ()
GLuint vertex_shader_obj = glCreateShader(GL_VERTEX_SHADER); // Create an empty vertex shader object and keep its handle
const GLchar* vertex_shader = "#version 450 core" \
"\n" \
"in vec4 vpos;" \
"uniform mat4 mvp_matrix;" \
"void main()" \
"{" \
" gl_Position = mvp_matrix * vpos;" \
"}";
glShaderSource(vertex_shader_obj, 1, (const GLchar**)&vertex_shader, NULL); // Hand the source string to the shader object; 2nd param: number of strings, 4th: array of string lengths (NULL means null-terminated)
glCompileShader(vertex_shader_obj);
// Setting up fragment shader
GLuint fragment_shader_obj = glCreateShader(GL_FRAGMENT_SHADER);
// LESSON 49 ('core' selects the core profile: modern GLSL only, no legacy fixed-function)
// Emit a constant blue color for every fragment
const GLchar* fragment_shader = "#version 450 core" \
"\n" \
"out vec4 fragColor;" \
"void main()" \
"{" \
" fragColor = vec4(0.0, 0.0, 1.0, 1.0);" \
"}";
glShaderSource(fragment_shader_obj, 1, (const GLchar**)&fragment_shader, NULL);
glCompileShader(fragment_shader_obj);
shader_program_obj = glCreateProgram();
glAttachShader(shader_program_obj, vertex_shader_obj);
glAttachShader(shader_program_obj, fragment_shader_obj);
// LESSON 49
glBindAttribLocation(shader_program_obj, POSITION, "vpos");
glLinkProgram(shader_program_obj);
// LESSON 49
mvp_uniform = glGetUniformLocation(shader_program_obj, "mvp_matrix");
// LESSON 49
const GLfloat triangleVertices[] =
{
// Perspective triangle (Front face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f, // Right bottom
// Perspective triangle (Right face)
0.0f, 1.0f, 0.0f, // Apex
1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, -1.0f, // Right bottom
// Perspective triangle (Back face)
0.0f, 1.0f, 0.0f, // Apex
1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, -1.0f, // Right bottom
// Perspective triangle (Left face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, 1.0f
};
// Generate a vertex array object (VAO); the name is a handle to GPU-side state
glGenVertexArrays(1, &vao_triangle);
// Bind the VAO so the attribute setup below is recorded into it
glBindVertexArray(vao_triangle);
// Generate a vertex buffer object (VBO) for the position data
glGenBuffers(1, &vbo_position_triangle);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_triangle);
// Params: target, size of the data in bytes, pointer to the data, usage hint
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
// Params: attribute index, 3 components per vertex, component type, not normalized, stride 0 (tightly packed), no byte offset
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
// Enable the attribute so the vertex fetch stage reads it from the bound buffer
glEnableVertexAttribArray(POSITION);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 50 homework
resize(800, 600);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
// LESSON 49
perspective_projection_matrix = vmath::perspective(45.0f, (GLfloat)w / (GLfloat)h, 1.0f, 100.0f );
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
// LESSON 49
glUseProgram(shader_program_obj);
glDrawArrays(GL_TRIANGLES, 0, 3); // draws only the first face; the vertex array holds 12 vertices (4 faces)
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
// glDeleteShader(vertex_shader);
// glDeleteShader(fragment_shader);
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
Continuation of the previous setup, adding per-vertex color to the triangle.
void glBufferData(GLenum target, GLsizeiptr size, const void *data, GLenum usage) creates a mutable data store, so passing a NULL data pointer works: glBufferData(target, size, NULL, usage) only reserves size bytes, and the storage can be filled later (e.g. with glBufferSubData or glMapBuffer).
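A hedged sketch of the allocate-now, fill-later pattern this enables (needs a current GL context; `vbo` and the sizes are illustrative). Note the usage parameter must still be a real enum such as GL_DYNAMIC_DRAW:

```c
/* Reserve storage only: data == NULL allocates size bytes, contents undefined */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 12 * sizeof(GLfloat), NULL, GL_DYNAMIC_DRAW);

/* Fill (part of) it later, e.g. once per frame */
GLfloat verts[12] = { 0 };           /* positions computed at run time */
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(verts), verts);
```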
#include <windows.h>
// LESSON 46
#include <GL/glew.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdbool.h>
#include "vmath.h"
#pragma comment(lib, "opengl32.lib")
// LESSON 46
#pragma comment(lib, "glew32.lib")
#pragma comment(linker, "/subsystem:windows" /*/entry:mainCRTStartup*/)
// using namespace vmath;
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
// LESSON 48
GLuint shader_program_obj;
// LESSON 49
enum {
POSITION = 0,
// LESSON 51
COLOR = 1,
};
GLuint vao_triangle;
GLuint vbo_position_triangle;
GLuint mvp_uniform;
// LESSON 51
GLuint vbo_triangle_color;
vmath::mat4 perspective_projection_matrix;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-OpenGL-App");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
// LESSON 46
GLenum result = glewInit();
if (result != GLEW_OK) {
return -5;
}
// LESSON 48 (You can write multiple vs and fs shaders)
// LESSON 49
GLuint vertex_shader_obj = glCreateShader(GL_VERTEX_SHADER); // Create an empty vertex shader object and keep its handle
const GLchar* vertex_shader = "#version 450 core" \
"\n" \
"in vec4 vpos;" \
"in vec3 color;" \
"out vec3 outColor;" \
"uniform mat4 mvp_matrix;" \
"void main()" \
"{" \
" gl_Position = mvp_matrix * vpos;" \
" outColor = color;" \
"}";
glShaderSource(vertex_shader_obj, 1, (const GLchar**)&vertex_shader, NULL); // Hand the source string to the shader object; 2nd param: number of strings, 4th: array of string lengths (NULL means null-terminated)
glCompileShader(vertex_shader_obj);
// Setting up fragment shader
GLuint fragment_shader_obj = glCreateShader(GL_FRAGMENT_SHADER);
// LESSON 49 ('core' selects the core profile: modern GLSL only, no legacy fixed-function)
// Emit the interpolated per-vertex color for each fragment
const GLchar* fragment_shader = "#version 450 core" \
"\n" \
"" \
"in vec3 outColor;" \
"out vec4 fragColor;" \
"void main()" \
"{" \
" fragColor = vec4(outColor, 1.0);" \
"}";
glShaderSource(fragment_shader_obj, 1, (const GLchar**)&fragment_shader, NULL);
glCompileShader(fragment_shader_obj);
shader_program_obj = glCreateProgram();
glAttachShader(shader_program_obj, vertex_shader_obj);
glAttachShader(shader_program_obj, fragment_shader_obj);
// LESSON 49
glBindAttribLocation(shader_program_obj, POSITION, "vpos");
// LESSON 51
glBindAttribLocation(shader_program_obj, COLOR, "color");
glLinkProgram(shader_program_obj);
// LESSON 49
mvp_uniform = glGetUniformLocation(shader_program_obj, "mvp_matrix");
// LESSON 51
// NOTE: only one RGB triple is supplied here, so only the first vertex has
// valid color data; attribute fetches for later vertices read past the buffer
const GLfloat triangleColor[] = {
1.0f, 0.0f, 0.0f
};
// LESSON 49
const GLfloat triangleVertices[] = {
// Perspective triangle (Front face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f, // Right bottom
// Perspective triangle (Right face)
0.0f, 1.0f, 0.0f, // Apex
1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, -1.0f, // Right bottom
// Perspective triangle (Back face)
0.0f, 1.0f, 0.0f, // Apex
1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, -1.0f, // Right bottom
// Perspective triangle (Left face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, 1.0f
};
// Generate a vertex array object (VAO); the name is a handle to GPU-side state
glGenVertexArrays(1, &vao_triangle);
// Bind the VAO so the attribute setup below is recorded into it
glBindVertexArray(vao_triangle);
// Generate a vertex buffer object (VBO) for the position data
glGenBuffers(1, &vbo_position_triangle);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_triangle);
// Params: target, size of the data in bytes, pointer to the data, usage hint
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
// Params: attribute index, 3 components per vertex, component type, not normalized, stride 0 (tightly packed), no byte offset
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
// Enable the attribute so the vertex fetch stage reads it from the bound buffer
glEnableVertexAttribArray(POSITION);
// LESSON 51
//glBindBuffer(GL_ARRAY_BUFFER, 0);
glGenBuffers(1, &vbo_triangle_color);
glBindBuffer(GL_ARRAY_BUFFER, vbo_triangle_color);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleColor), triangleColor, GL_STATIC_DRAW);
glVertexAttribPointer(COLOR, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(COLOR);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
resize(800, 600);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
// LESSON 49
perspective_projection_matrix = vmath::perspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
// LESSON 49
glUseProgram(shader_program_obj);
glDrawArrays(GL_TRIANGLES, 0, 3); // draws only the first face; the vertex array holds 12 vertices (4 faces)
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
// glDeleteShader(vertex_shader);
// glDeleteShader(fragment_shader);
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
Continuation from previous session
#include <windows.h>
// LESSON 46
#include <GL/glew.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdbool.h>
#include "vmath.h"
#pragma comment(lib, "opengl32.lib")
// LESSON 46
#pragma comment(lib, "glew32.lib")
#pragma comment(linker, "/subsystem:windows" /*/entry:mainCRTStartup*/)
// using namespace vmath;
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
// LESSON 48
GLuint shader_program_obj;
// LESSON 49
enum {
POSITION = 0,
// LESSON 51
COLOR = 1,
};
GLuint vao_triangle;
GLuint vbo_position_triangle;
GLuint mvp_uniform;
// LESSON 51
GLuint vbo_triangle_color;
vmath::mat4 perspective_projection_matrix;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-OpenGL-App");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
// LESSON 46
GLenum result = glewInit();
if (result != GLEW_OK) {
return -5;
}
// LESSON 48 (You can write multiple vs and fs shaders)
// LESSON 49
GLuint vertex_shader_obj = glCreateShader(GL_VERTEX_SHADER); // Create an empty vertex shader object and keep its handle
const GLchar* vertex_shader = "#version 450 core" \
"\n" \
"in vec4 vpos;" \
"in vec3 color;" \
"out vec3 outColor;" \
"uniform mat4 mvp_matrix;" \
"void main()" \
"{" \
" gl_Position = mvp_matrix * vpos;" \
" outColor = color;" \
"}";
glShaderSource(vertex_shader_obj, 1, (const GLchar**)&vertex_shader, NULL); // Hand the source string to the shader object; 2nd param: number of strings, 4th: array of string lengths (NULL means null-terminated)
glCompileShader(vertex_shader_obj);
// Setting up fragment shader
GLuint fragment_shader_obj = glCreateShader(GL_FRAGMENT_SHADER);
// LESSON 49 ('core' selects the core profile: modern GLSL only, no legacy fixed-function)
// Emit the interpolated per-vertex color for each fragment
const GLchar* fragment_shader = "#version 450 core" \
"\n" \
"" \
"in vec3 outColor;" \
"out vec4 fragColor;" \
"void main()" \
"{" \
" fragColor = vec4(outColor, 1.0);" \
"}";
glShaderSource(fragment_shader_obj, 1, (const GLchar**)&fragment_shader, NULL);
glCompileShader(fragment_shader_obj);
shader_program_obj = glCreateProgram();
glAttachShader(shader_program_obj, vertex_shader_obj);
glAttachShader(shader_program_obj, fragment_shader_obj);
// LESSON 49
glBindAttribLocation(shader_program_obj, POSITION, "vpos");
// LESSON 51
glBindAttribLocation(shader_program_obj, COLOR, "color");
glLinkProgram(shader_program_obj);
// LESSON 49
mvp_uniform = glGetUniformLocation(shader_program_obj, "mvp_matrix");
// LESSON 51
// NOTE: only one RGB triple is supplied here, so only the first vertex has
// valid color data; attribute fetches for later vertices read past the buffer
const GLfloat triangleColor[] = {
1.0f, 0.0f, 0.0f
};
// LESSON 49
const GLfloat triangleVertices[] = {
// Perspective triangle (Front face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f, // Right bottom
// Perspective triangle (Right face)
0.0f, 1.0f, 0.0f, // Apex
1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, -1.0f, // Right bottom
// Perspective triangle (Back face)
0.0f, 1.0f, 0.0f, // Apex
1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, -1.0f, // Right bottom
// Perspective triangle (Left face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, 1.0f
};
// Generate a vertex array object (VAO); the name is a handle to GPU-side state
glGenVertexArrays(1, &vao_triangle);
// Bind the VAO so the attribute setup below is recorded into it
glBindVertexArray(vao_triangle);
// Generate a vertex buffer object (VBO) for the position data
glGenBuffers(1, &vbo_position_triangle);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_triangle);
// Params: target, size of the data in bytes, pointer to the data, usage hint
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
// Params: attribute index, 3 components per vertex, component type, not normalized, stride 0 (tightly packed), no byte offset
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
// Enable the attribute so the vertex fetch stage reads it from the bound buffer
glEnableVertexAttribArray(POSITION);
// LESSON 51
//glBindBuffer(GL_ARRAY_BUFFER, 0);
glGenBuffers(1, &vbo_triangle_color);
glBindBuffer(GL_ARRAY_BUFFER, vbo_triangle_color);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleColor), triangleColor, GL_STATIC_DRAW);
glVertexAttribPointer(COLOR, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(COLOR);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 52
perspective_projection_matrix = vmath::mat4::identity();
resize(800, 600);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
// LESSON 49
perspective_projection_matrix = vmath::perspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
// LESSON 49
glUseProgram(shader_program_obj);
// LESSON 52
vmath::mat4 modelviewmatrix;
vmath::mat4 modelviewprojection;
static GLfloat angle = 0.0f;
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(-1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 1.0f, 0.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
glDrawArrays(GL_TRIANGLES, 0, 3); // draws only the first face; the vertex array holds 12 vertices (4 faces)
// Advance the rotation for the next frame
angle += 0.5f;
if (angle >= 360.0f)
angle = 0.0f;
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
// glDeleteShader(vertex_shader);
// glDeleteShader(fragment_shader);
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
Continuation from previous session:
#include <windows.h>
// LESSON 46
#include <GL/glew.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdbool.h>
#include "vmath.h"
#pragma comment(lib, "opengl32.lib")
// LESSON 46
#pragma comment(lib, "glew32.lib")
#pragma comment(linker, "/subsystem:windows" /*/entry:mainCRTStartup*/)
// using namespace vmath;
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
// LESSON 48
GLuint shader_program_obj;
// LESSON 49
enum {
POSITION = 0,
// LESSON 51
COLOR = 1,
};
GLuint vao_triangle;
GLuint vbo_position_triangle;
GLuint mvp_uniform;
// LESSON 51
GLuint vbo_triangle_color;
vmath::mat4 perspective_projection_matrix;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-OpenGL-App");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
// LESSON 46
GLenum result = glewInit();
if (result != GLEW_OK) {
return -5;
}
// LESSON 48 (You can write multiple vs and fs shaders)
// LESSON 49
GLuint vertex_shader_obj = glCreateShader(GL_VERTEX_SHADER); // Create an empty vertex shader object and keep its handle
const GLchar* vertex_shader = "#version 450 core" \
"\n" \
"in vec4 vpos;" \
"in vec3 color;" \
"out vec3 outColor;" \
"uniform mat4 mvp_matrix;" \
"void main()" \
"{" \
" gl_Position = mvp_matrix * vpos;" \
" outColor = color;" \
"}";
glShaderSource(vertex_shader_obj, 1, (const GLchar**)&vertex_shader, NULL); // Hand the source string to the shader object; 2nd param: number of strings, 4th: array of string lengths (NULL means null-terminated)
glCompileShader(vertex_shader_obj);
// Setting up fragment shader
GLuint fragment_shader_obj = glCreateShader(GL_FRAGMENT_SHADER);
// LESSON 49 ('core' selects the core profile: modern GLSL only, no legacy fixed-function)
// Emit the interpolated per-vertex color for each fragment
const GLchar* fragment_shader = "#version 450 core" \
"\n" \
"" \
"in vec3 outColor;" \
"out vec4 fragColor;" \
"void main()" \
"{" \
" fragColor = vec4(outColor, 1.0);" \
"}";
glShaderSource(fragment_shader_obj, 1, (const GLchar**)&fragment_shader, NULL);
glCompileShader(fragment_shader_obj);
shader_program_obj = glCreateProgram();
glAttachShader(shader_program_obj, vertex_shader_obj);
glAttachShader(shader_program_obj, fragment_shader_obj);
// LESSON 49
glBindAttribLocation(shader_program_obj, POSITION, "vpos");
// LESSON 51
glBindAttribLocation(shader_program_obj, COLOR, "color");
glLinkProgram(shader_program_obj);
// LESSON 49
mvp_uniform = glGetUniformLocation(shader_program_obj, "mvp_matrix");
// LESSON 51
const GLfloat triangleColor[] = {
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f
};
// LESSON 49
const GLfloat triangleVertices[] = {
// Perspective triangle (Front face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f, // Right bottom
// Perspective triangle (Right face)
0.0f, 1.0f, 0.0f, // Apex
1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, -1.0f, // Right bottom
// Perspective triangle (Back face)
0.0f, 1.0f, 0.0f, // Apex
1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, -1.0f, // Right bottom
// Perspective triangle (Left face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, 1.0f
};
// Generate a VAO name (handle to a GPU-side vertex array object)
glGenVertexArrays(1, &vao_triangle);
// Bind the VAO so subsequent vertex state is recorded into it
glBindVertexArray(vao_triangle);
// Create a buffer object for the vertex positions
glGenBuffers(1, &vbo_position_triangle);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_triangle);
// Params: target, size of the data in bytes, pointer to the data, usage hint
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
// Params: attribute index, components per vertex (3), component type, normalized flag, stride (0 = tightly packed), offset into the buffer
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
// Enable the attribute array so the shader reads it per vertex
glEnableVertexAttribArray(POSITION);
// LESSON 51
//glBindBuffer(GL_ARRAY_BUFFER, 0);
glGenBuffers(1, &vbo_triangle_color);
glBindBuffer(GL_ARRAY_BUFFER, vbo_triangle_color);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleColor), triangleColor, GL_STATIC_DRAW);
glVertexAttribPointer(COLOR, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(COLOR);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 52
perspective_projection_matrix = vmath::mat4::identity();
resize(800, 600);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
// LESSON 49
perspective_projection_matrix = vmath::perspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
// LESSON 49
glUseProgram(shader_program_obj);
// LESSON 52
vmath::mat4 modelviewmatrix;
vmath::mat4 modelviewprojection;
static GLfloat angle = 0.0f;
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(-1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 1.0f, 0.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
glDrawArrays(GL_TRIANGLES, 0, 3);
// LESSON 53
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 0.0f, 1.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
glBindVertexArray(vao_triangle);
glDrawArrays(GL_TRIANGLES, 0, 3);
angle += 0.05f;
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
// glDeleteShader(vertex_shader);
// glDeleteShader(fragment_shader);
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
Continuation from the previous lesson.
To draw multiple, different objects in one frame, remember to bind each object's VAO (and enable its attribute arrays) before drawing it, and unbind it afterwards.
#include <windows.h>
// LESSON 46
#include <GL/glew.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdbool.h>
#include "vmath.h"
#pragma comment(lib, "opengl32.lib")
// LESSON 46
#pragma comment(lib, "glew32.lib")
#pragma comment(linker, "/subsystem:windows")
// using namespace vmath;
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
// LESSON 48
GLuint shader_program_obj;
// LESSON 49
enum {
POSITION = 0,
// LESSON 51
COLOR = 1,
};
GLuint vao_triangle;
GLuint vbo_position_triangle;
GLuint mvp_uniform;
// LESSON 51
GLuint vbo_triangle_color;
vmath::mat4 perspective_projection_matrix;
// LESSON 54
GLuint vao_square;
GLuint vbo_position_square;
GLuint vbo_square_color;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-OpenGL-App");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
// LESSON 46
GLenum result = glewInit();
if (result != GLEW_OK) {
return -5;
}
// LESSON 48 (You can write multiple vs and fs shaders)
// Setting up the vertex shader
GLuint vertex_shader_obj = glCreateShader(GL_VERTEX_SHADER); // Create an empty vertex shader object and get its handle
const GLchar* vertex_shader = "#version 450 core" \
"\n" \
"in vec3 vpos;" \
"in vec3 color;" \
"out vec3 outColor;" \
"uniform mat4 mvp_matrix;" \
"void main()" \
"{" \
" gl_Position = mvp_matrix * vec4(vpos, 1.0f);" \
" outColor = color;" \
"}";
glShaderSource(vertex_shader_obj, 1, (const GLchar**)&vertex_shader, NULL); // Attach the source to the shader object (2nd param: number of strings, 4th: array of string lengths; NULL means null-terminated)
glCompileShader(vertex_shader_obj);
// Setting up fragment shader
GLuint fragment_shader_obj = glCreateShader(GL_FRAGMENT_SHADER);
// LESSON 49 ("core" selects the core profile rather than the legacy/compatibility profile)
// Emit the interpolated per-vertex color
const GLchar* fragment_shader = "#version 450 core" \
"\n" \
"" \
"in vec3 outColor;" \
"out vec4 fragColor;" \
"void main()" \
"{" \
" fragColor = vec4(outColor, 1.0);" \
"}";
glShaderSource(fragment_shader_obj, 1, (const GLchar**)&fragment_shader, NULL);
glCompileShader(fragment_shader_obj);
shader_program_obj = glCreateProgram();
glAttachShader(shader_program_obj, vertex_shader_obj);
glAttachShader(shader_program_obj, fragment_shader_obj);
// LESSON 49
glBindAttribLocation(shader_program_obj, POSITION, "vpos");
// LESSON 51
glBindAttribLocation(shader_program_obj, COLOR, "color");
glLinkProgram(shader_program_obj);
// LESSON 49
mvp_uniform = glGetUniformLocation(shader_program_obj, "mvp_matrix");
// LESSON 51
const GLfloat triangleColor[] = {
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f
};
// LESSON 54
const GLfloat squareColor[] = {
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f
};
//// LESSON 49
const GLfloat triangleVertices[] = {
// Perspective triangle (Front face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f, // Right bottom
// Perspective triangle (Right face)
0.0f, 1.0f, 0.0f, // Apex
1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, -1.0f, // Right bottom
// Perspective triangle (Back face)
0.0f, 1.0f, 0.0f, // Apex
1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, -1.0f, // Right bottom
// Perspective triangle (Left face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, 1.0f
};
// LESSON 54
const GLfloat squareVertices[] = {
1.0f, 1.0f, 0.0f,
-1.0f, 1.0f, 0.0f,
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f
};
//// Generate a VAO name (handle to a GPU-side vertex array object)
//glGenVertexArrays(1, &vao_triangle);
//// Bind the VAO so subsequent vertex state is recorded into it
//glBindVertexArray(vao_triangle);
//// Create a buffer object for the vertex positions
//glGenBuffers(1, &vbo_position_triangle);
//glBindBuffer(GL_ARRAY_BUFFER, vbo_position_triangle);
//// Params: target, size of the data in bytes, pointer to the data, usage hint
//glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
//// Params: attribute index, components per vertex (3), component type, normalized flag, stride (0 = tightly packed), offset into the buffer
//glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
//// Enable the attribute array so the shader reads it per vertex
//glEnableVertexAttribArray(POSITION);
// LESSON 51
// glBindBuffer(GL_ARRAY_BUFFER, 0);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 54
glGenVertexArrays(1, &vao_square);
glBindVertexArray(vao_square);
glGenBuffers(1, &vbo_position_square);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_square);
glBufferData(GL_ARRAY_BUFFER, sizeof(squareVertices), squareVertices, GL_STATIC_DRAW);
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(POSITION);
glGenBuffers(1, &vbo_square_color);
glBindBuffer(GL_ARRAY_BUFFER, vbo_square_color);
glBufferData(GL_ARRAY_BUFFER, sizeof(squareColor), squareColor, GL_STATIC_DRAW);
glVertexAttribPointer(COLOR, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(COLOR);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
// LESSON 54
glGenVertexArrays(1, &vao_triangle);
glBindVertexArray(vao_triangle);
glGenBuffers(1, &vbo_position_triangle);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_triangle);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(POSITION);
glGenBuffers(1, &vbo_triangle_color);
glBindBuffer(GL_ARRAY_BUFFER, vbo_triangle_color);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleColor), triangleColor, GL_STATIC_DRAW);
glVertexAttribPointer(COLOR, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(COLOR);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 52
perspective_projection_matrix = vmath::mat4::identity();
resize(800, 600);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
// LESSON 49
perspective_projection_matrix = vmath::perspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
// LESSON 49
glUseProgram(shader_program_obj);
// LESSON 52
vmath::mat4 modelviewmatrix;
vmath::mat4 modelviewprojection;
static GLfloat angle = 0.0f;
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(-1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 1.0f, 0.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
// LESSON 54
glBindVertexArray(vao_triangle);
glDrawArrays(GL_TRIANGLE_FAN, 0, 3);
// LESSON 54
glBindVertexArray(0);
// LESSON 53
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 0.0f, 1.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
glBindVertexArray(vao_square);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
// LESSON 54
glBindVertexArray(0);
angle += 0.05f;
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
// glDeleteShader(vertex_shader);
// glDeleteShader(fragment_shader);
// glDeleteBuffers();
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
Continuation of the previous lessons: the triangle and square become 3D objects (a pyramid and a cube), with a color assigned to each side.
#ifdef _WIN32
#include <windows.h>
#endif
// LESSON 46
#include <GL/glew.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdbool.h>
#include "vmath.h"
#pragma comment(lib, "opengl32.lib")
// LESSON 46
#pragma comment(lib, "glew32.lib")
#pragma comment(linker, "/subsystem:windows")
// using namespace vmath;
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
// LESSON 48
GLuint shader_program_obj;
// LESSON 49
enum {
POSITION = 0,
// LESSON 51
COLOR = 1,
};
GLuint vao_triangle;
GLuint vbo_position_triangle;
GLuint mvp_uniform;
// LESSON 51
GLuint vbo_triangle_color;
vmath::mat4 perspective_projection_matrix;
// LESSON 54
GLuint vao_square;
GLuint vbo_position_square;
GLuint vbo_square_color;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-OpenGL-App");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
// LESSON 46
GLenum result = glewInit();
if (result != GLEW_OK) {
return -5;
}
// LESSON 48 (You can write multiple vs and fs shaders)
// Setting up the vertex shader
GLuint vertex_shader_obj = glCreateShader(GL_VERTEX_SHADER); // Create an empty vertex shader object and get its handle
const GLchar* vertex_shader = "#version 450 core" \
"\n" \
"in vec3 vpos;" \
"in vec3 color;" \
"out vec3 outColor;" \
"uniform mat4 mvp_matrix;" \
"void main()" \
"{" \
" gl_Position = mvp_matrix * vec4(vpos, 1.0f);" \
" outColor = color;" \
"}";
glShaderSource(vertex_shader_obj, 1, (const GLchar**)&vertex_shader, NULL); // Attach the source to the shader object (2nd param: number of strings, 4th: array of string lengths; NULL means null-terminated)
glCompileShader(vertex_shader_obj);
// Setting up fragment shader
GLuint fragment_shader_obj = glCreateShader(GL_FRAGMENT_SHADER);
// LESSON 49 ("core" selects the core profile rather than the legacy/compatibility profile)
// Emit the interpolated per-vertex color
const GLchar* fragment_shader = "#version 450 core" \
"\n" \
"" \
"in vec3 outColor;" \
"out vec4 fragColor;" \
"void main()" \
"{" \
" fragColor = vec4(outColor, 1.0);" \
"}";
glShaderSource(fragment_shader_obj, 1, (const GLchar**)&fragment_shader, NULL);
glCompileShader(fragment_shader_obj);
shader_program_obj = glCreateProgram();
glAttachShader(shader_program_obj, vertex_shader_obj);
glAttachShader(shader_program_obj, fragment_shader_obj);
// LESSON 49
glBindAttribLocation(shader_program_obj, POSITION, "vpos");
// LESSON 51
glBindAttribLocation(shader_program_obj, COLOR, "color");
glLinkProgram(shader_program_obj);
// LESSON 49
mvp_uniform = glGetUniformLocation(shader_program_obj, "mvp_matrix");
// LESSON 51
const GLfloat triangleColor[] = {
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
// LESSON 55
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f
};
// LESSON 54
const GLfloat squareColor[] = {
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f,
// LESSON 55
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f,
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f,
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f,
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f,
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f
};
//// LESSON 49
const GLfloat triangleVertices[] = {
// Perspective triangle (Front face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f, // Right bottom
// Perspective triangle (Right face)
0.0f, 1.0f, 0.0f, // Apex
1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, -1.0f, // Right bottom
// Perspective triangle (Back face)
0.0f, 1.0f, 0.0f, // Apex
1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, -1.0f, // Right bottom
// Perspective triangle (Left face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, 1.0f
};
// LESSON 54
const GLfloat squareVertices[] = {
// LESSON 55
// Perspective square (Top face)
1.0f, 1.0f, -1.0f, // Right top
-1.0f, 1.0f, -1.0f, // Left top
-1.0f, 1.0f, 1.0f, // Left bottom
1.0f, 1.0f, 1.0f, // Right bottom
// Perspective square (Bottom face)
1.0f, -1.0f, -1.0f, // Right top
-1.0f, -1.0f, -1.0f, // Left top
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f, // Right bottom
// Perspective square (Front face)
1.0f, 1.0f, 1.0f, // Right top
-1.0f, 1.0f, 1.0f, // Left top
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f, // Right bottom
// Perspective square (Back face)
1.0f, 1.0f, -1.0f, // Right top
-1.0f, 1.0f, -1.0f, // Left top
-1.0f, -1.0f, -1.0f, // Left bottom
1.0f, -1.0f, -1.0f, // Right bottom
// Perspective square (Right face)
1.0f, 1.0f, -1.0f, // Right top
1.0f, 1.0f, 1.0f, // Left top
1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, -1.0f, // Right bottom
// Perspective square (Left face)
-1.0f, 1.0f, 1.0f, // Right top
-1.0f, 1.0f, -1.0f, // Left top
-1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, 1.0f // Right bottom
};
//// Generate a VAO name (handle to a GPU-side vertex array object)
//glGenVertexArrays(1, &vao_triangle);
//// Bind the VAO so subsequent vertex state is recorded into it
//glBindVertexArray(vao_triangle);
//// Create a buffer object for the vertex positions
//glGenBuffers(1, &vbo_position_triangle);
//glBindBuffer(GL_ARRAY_BUFFER, vbo_position_triangle);
//// Params: target, size of the data in bytes, pointer to the data, usage hint
//glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
//// Params: attribute index, components per vertex (3), component type, normalized flag, stride (0 = tightly packed), offset into the buffer
//glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
//// Enable the attribute array so the shader reads it per vertex
//glEnableVertexAttribArray(POSITION);
// LESSON 51
// glBindBuffer(GL_ARRAY_BUFFER, 0);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 54
glGenVertexArrays(1, &vao_square);
glBindVertexArray(vao_square);
glGenBuffers(1, &vbo_position_square);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_square);
glBufferData(GL_ARRAY_BUFFER, sizeof(squareVertices), squareVertices, GL_STATIC_DRAW);
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(POSITION);
glGenBuffers(1, &vbo_square_color);
glBindBuffer(GL_ARRAY_BUFFER, vbo_square_color);
glBufferData(GL_ARRAY_BUFFER, sizeof(squareColor), squareColor, GL_STATIC_DRAW);
glVertexAttribPointer(COLOR, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(COLOR);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
// LESSON 54
glGenVertexArrays(1, &vao_triangle);
glBindVertexArray(vao_triangle);
glGenBuffers(1, &vbo_position_triangle);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_triangle);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(POSITION);
glGenBuffers(1, &vbo_triangle_color);
glBindBuffer(GL_ARRAY_BUFFER, vbo_triangle_color);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleColor), triangleColor, GL_STATIC_DRAW);
glVertexAttribPointer(COLOR, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(COLOR);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 52
perspective_projection_matrix = vmath::mat4::identity();
resize(800, 600);
// LESSON 55
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
// LESSON 49
perspective_projection_matrix = vmath::perspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
// LESSON 55
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// LESSON 49
glUseProgram(shader_program_obj);
// LESSON 52
vmath::mat4 modelviewmatrix;
vmath::mat4 modelviewprojection;
static GLfloat angle = 0.0f;
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(-1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 1.0f, 0.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
// LESSON 54
glBindVertexArray(vao_triangle);
// LESSON 55
glDrawArrays(GL_TRIANGLES, 0, 3 * 4); // 4 faces of 3 vertices each; a single fan over all 12 would connect the faces incorrectly
// LESSON 54
glBindVertexArray(0);
// LESSON 53
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 0.0f, 1.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
glBindVertexArray(vao_square);
// LESSON 55 (This renders each side of the cube individually)
//glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
//glDrawArrays(GL_TRIANGLE_FAN, 4, 4); // Second param is the starting vertex index, third is the vertex count (4 per face)
//glDrawArrays(GL_TRIANGLE_FAN, 8, 4);
//glDrawArrays(GL_TRIANGLE_FAN, 12, 4);
//glDrawArrays(GL_TRIANGLE_FAN, 16, 4);
//glDrawArrays(GL_TRIANGLE_FAN, 20, 4);
// LESSON 55 (drawing the whole cube in one loop, one 4-vertex fan per face)
for (int i = 0; i < 24; i += 4)
glDrawArrays(GL_TRIANGLE_FAN, i, 4);
// LESSON 54
glBindVertexArray(0);
angle += 0.05f;
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
// glDeleteShader(vertex_shader);
// glDeleteShader(fragment_shader);
// glDeleteBuffers();
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
There are two ways to pass data to the GPU: attributes, which we have seen already, and uniforms. Attribute data is uploaded into buffers up front, while uniforms can be updated on the fly (typically set in display()).
Attributes are per-vertex and static between uploads; uniforms are updated on request, usually at least once per frame, and are accessible from every shader stage in the program.
Always bind attribute locations before linking the program, and query uniform locations after!
...
GLuint color_uniform;
int color_type = 0;
...
initialize()
...
// Attributes
glBindAttribLocation(shader_program_obj, POSITION, "vpos");
glBindAttribLocation(shader_program_obj, COLOR, "color");
// Linking program
glLinkProgram(shader_program_obj);
// Uniforms
mvp_uniform = glGetUniformLocation(shader_program_obj, "mvp_matrix");
color_uniform = glGetUniformLocation(shader_program_obj, "color");
...
initialize() -> fragment shader:
const GLchar* fragment_shader = "#version 450 core" \
"\n" \
"" \
"in vec3 outColor;" \
"out vec4 fragColor;" \
"uniform vec3 color;" \
"void main()" \
"{" \
" fragColor = vec4(color, 1.0);" \
"}";
...
color_uniform = glGetUniformLocation(shader_program_obj, "color");
...
display()
...
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(-1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 1.0f, 0.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
// LESSON 56
glUniform3f(color_uniform, 1.0f, 0.0f, 0.0f);
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 0.0f, 1.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
// LESSON 56
glUniform3f(color_uniform, 1.0f, 0.0f, 0.0f);
...
You can rebind uniforms before each object to change the value.
WM_CHAR doesn't report the numpad keys as expected, so the handling was moved to WM_KEYDOWN.
If you want to pass something to the GPU dynamically, use uniforms!
#ifdef _WIN32
#include <windows.h>
#endif
// LESSON 46
#include <GL/glew.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdbool.h>
#include "vmath.h"
#pragma comment(lib, "opengl32.lib")
// LESSON 46
#pragma comment(lib, "glew32.lib")
#pragma comment(linker, "/subsystem:windows")
// using namespace vmath;
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
// LESSON 48
GLuint shader_program_obj;
// LESSON 49
enum {
POSITION = 0,
// LESSON 51
COLOR = 1,
};
GLuint vao_triangle;
GLuint vbo_position_triangle;
GLuint mvp_uniform;
// LESSON 51
GLuint vbo_triangle_color;
vmath::mat4 perspective_projection_matrix;
// LESSON 54
GLuint vao_square;
GLuint vbo_position_square;
GLuint vbo_square_color;
// LESSON 56
GLuint color_uniform;
int color_type = 0;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-OpenGL-App");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_KEYDOWN:
switch (wParam)
{
// LESSON 56
case VK_NUMPAD0:
color_type = 0;
break;
case VK_NUMPAD1:
color_type = 1;
break;
case VK_NUMPAD2:
color_type = 2;
break;
case VK_NUMPAD3:
color_type = 3;
break;
case VK_NUMPAD4:
color_type = 4;
break;
case VK_NUMPAD5:
color_type = 5;
break;
// VK_NUMPAD6 (0x66) has the same numeric value as 'f', so this case collided with the fullscreen toggle
/*case VK_NUMPAD6:
color_type = 6;
break;*/
case VK_NUMPAD7:
color_type = 7;
break;
case VK_NUMPAD8:
color_type = 8;
break;
case VK_NUMPAD9:
color_type = 9;
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
// LESSON 46
GLenum result = glewInit();
if (result != GLEW_OK) {
return -5;
}
// LESSON 48 (You can write multiple vs and fs shaders)
// Setting up the vertex shader
GLuint vertex_shader_obj = glCreateShader(GL_VERTEX_SHADER); // Create the vertex shader object and get back its handle
const GLchar* vertex_shader = "#version 450 core" \
"\n" \
"in vec3 vpos;" \
"in vec3 color;" \
"out vec3 outColor;" \
"uniform mat4 mvp_matrix;" \
"void main()" \
"{" \
" gl_Position = mvp_matrix * vec4(vpos, 1.0f);" \
" outColor = color;" \
"}";
glShaderSource(vertex_shader_obj, 1, (const GLchar**)&vertex_shader, NULL); // Attach the source to the shader object (2nd param: number of strings; 4th: array of string lengths, NULL = null-terminated)
glCompileShader(vertex_shader_obj);
// Setting up fragment shader
GLuint fragment_shader_obj = glCreateShader(GL_FRAGMENT_SHADER);
// LESSON 49 ("core" selects the core profile: modern GLSL only, no deprecated legacy features)
// Emit the uniform color for every fragment the geometry covers
const GLchar* fragment_shader = "#version 450 core" \
"\n" \
"" \
"in vec3 outColor;" \
"out vec4 fragColor;" \
"uniform vec3 color;" \
"void main()" \
"{" \
" fragColor = vec4(color, 1.0);" \
"}";
glShaderSource(fragment_shader_obj, 1, (const GLchar**)&fragment_shader, NULL);
glCompileShader(fragment_shader_obj);
shader_program_obj = glCreateProgram();
glAttachShader(shader_program_obj, vertex_shader_obj);
glAttachShader(shader_program_obj, fragment_shader_obj);
// LESSON 49
glBindAttribLocation(shader_program_obj, POSITION, "vpos");
// LESSON 51
glBindAttribLocation(shader_program_obj, COLOR, "color");
glLinkProgram(shader_program_obj);
// LESSON 49
mvp_uniform = glGetUniformLocation(shader_program_obj, "mvp_matrix");
// LESSON 56
color_uniform = glGetUniformLocation(shader_program_obj, "color");
// LESSON 51
const GLfloat triangleColor[] = {
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f
};
// LESSON 54
const GLfloat squareColor[] = {
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f
};
//// LESSON 49
const GLfloat triangleVertices[] = {
// Perspective triangle (Front face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f
};
// LESSON 54
const GLfloat squareVertices[] = {
// Perspective square (Front face)
1.0f, 1.0f, 1.0f, // Right top
-1.0f, 1.0f, 1.0f, // Left top
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f
};
//// Generate a VAO name (a handle to GPU-side vertex-array state)
//glGenVertexArrays(1, &vao_triangle);
//// Bind it so subsequent vertex-array calls record into this VAO
//glBindVertexArray(vao_triangle);
//// Create a buffer object (VBO) for the position data
//glGenBuffers(1, &vbo_position_triangle);
//glBindBuffer(GL_ARRAY_BUFFER, vbo_position_triangle);
//// glBufferData params: target, size of the data in bytes, pointer to the data, usage hint
//glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
//// glVertexAttribPointer params: attribute index, components per vertex (3), data type, normalize?, stride (0 = tightly packed), offset into the buffer
//glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
//// Enable the attribute array so the vertex data reaches the shader
//glEnableVertexAttribArray(POSITION);
// LESSON 51
// glBindBuffer(GL_ARRAY_BUFFER, 0);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 54
glGenVertexArrays(1, &vao_square);
glBindVertexArray(vao_square);
glGenBuffers(1, &vbo_position_square);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_square);
glBufferData(GL_ARRAY_BUFFER, sizeof(squareVertices), squareVertices, GL_STATIC_DRAW);
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(POSITION);
glGenBuffers(1, &vbo_square_color);
glBindBuffer(GL_ARRAY_BUFFER, vbo_square_color);
glBufferData(GL_ARRAY_BUFFER, sizeof(squareColor), squareColor, GL_STATIC_DRAW);
glVertexAttribPointer(COLOR, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(COLOR);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
// LESSON 54
glGenVertexArrays(1, &vao_triangle);
glBindVertexArray(vao_triangle);
glGenBuffers(1, &vbo_position_triangle);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_triangle);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(POSITION);
glGenBuffers(1, &vbo_triangle_color);
glBindBuffer(GL_ARRAY_BUFFER, vbo_triangle_color);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleColor), triangleColor, GL_STATIC_DRAW);
glVertexAttribPointer(COLOR, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(COLOR);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 52
perspective_projection_matrix = vmath::mat4::identity();
resize(800, 600);
// LESSON 55
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
// LESSON 49
perspective_projection_matrix = vmath::perspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
// LESSON 55
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// LESSON 49
glUseProgram(shader_program_obj);
// LESSON 52
vmath::mat4 modelviewmatrix;
vmath::mat4 modelviewprojection;
static GLfloat angle = 0.0f;
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(-1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 1.0f, 0.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
// LESSON 56
if (color_type == 0) {
glUniform3f(color_uniform, 1.0f, 0.0f, 0.0f);
}
else if (color_type == 1) {
glUniform3f(color_uniform, 0.0f, 1.0f, 0.0f);
}
else if (color_type == 2) {
glUniform3f(color_uniform, 0.0f, 0.0f, 1.0f);
}
else if (color_type == 3) {
glUniform3f(color_uniform, 1.0f, 1.0f, 0.0f);
}
else if (color_type == 4) {
glUniform3f(color_uniform, 0.0f, 1.0f, 1.0f);
}
else if (color_type == 5) {
glUniform3f(color_uniform, 1.0f, 0.0f, 1.0f);
}
else if (color_type == 7) {
glUniform3f(color_uniform, 1.0f, 1.0f, 1.0f);
}
else if (color_type == 8) {
glUniform3f(color_uniform, 0.5f, 1.0f, 0.2f);
}
else if (color_type == 9) {
glUniform3f(color_uniform, 1.0f, 0.7f, 4.0f);
}
// LESSON 54
glBindVertexArray(vao_triangle);
// LESSON 55
glDrawArrays(GL_TRIANGLE_FAN, 0, 3);
// LESSON 54
glBindVertexArray(0);
// LESSON 53
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 0.0f, 1.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
// LESSON 56
if (color_type == 0) {
glUniform3f(color_uniform, 1.0f, 0.0f, 0.0f);
}
else if (color_type == 1) {
glUniform3f(color_uniform, 0.0f, 1.0f, 0.0f);
}
else if (color_type == 2) {
glUniform3f(color_uniform, 0.0f, 0.0f, 1.0f);
}
else if (color_type == 3) {
glUniform3f(color_uniform, 1.0f, 1.0f, 0.0f);
}
else if (color_type == 4) {
glUniform3f(color_uniform, 0.0f, 1.0f, 1.0f);
}
else if (color_type == 5) {
glUniform3f(color_uniform, 1.0f, 0.0f, 1.0f);
}
else if (color_type == 7) {
glUniform3f(color_uniform, 1.0f, 1.0f, 1.0f);
}
else if (color_type == 8) {
glUniform3f(color_uniform, 0.5f, 1.0f, 0.2f);
}
else if (color_type == 9) {
glUniform3f(color_uniform, 1.0f, 0.7f, 4.0f);
}
glBindVertexArray(vao_square);
// LESSON 55 (This renders each side of the square individually)
//glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
//glDrawArrays(GL_TRIANGLE_FAN, 4, 4); // Second param is the index of the first vertex, third is the vertex count (4 per face)
//glDrawArrays(GL_TRIANGLE_FAN, 8, 4);
//glDrawArrays(GL_TRIANGLE_FAN, 12, 4);
//glDrawArrays(GL_TRIANGLE_FAN, 16, 4);
//glDrawArrays(GL_TRIANGLE_FAN, 20, 4);
// LESSON 55 (drawing the cube in one loop)
// NOTE: in this snapshot squareVertices holds only the front face (4 vertices), so only the first iteration draws valid data
for (int i = 0; i < 21; i = i + 4)
glDrawArrays(GL_TRIANGLE_FAN, i, 4);
// LESSON 54
glBindVertexArray(0);
angle += 0.05f;
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
// glDeleteShader(vertex_shader);
// glDeleteShader(fragment_shader);
// glDeleteBuffers();
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
Shaders execute in parallel. As an illustration of GPU throughput: a task that would take 24 hours on a CPU can take around 24 seconds on the GPU, where the shaders run.
Since the GPU runs its thousands of cores in lockstep groups, divergent conditional statements are expensive and should be avoided in shader code where possible.
This lesson applies a conditional in the fragment shader, driven by an integer uniform (bRed).
TODO: detail how to add a uniform.
#ifdef _WIN32
#include <windows.h>
#endif
// LESSON 46
#include <GL/glew.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdbool.h>
#include "vmath.h"
#pragma comment(lib, "opengl32.lib")
// LESSON 46
#pragma comment(lib, "glew32.lib")
#pragma comment(linker, "/subsystem:windows")
// using namespace vmath;
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
void uninitialize(void);
void toggle_fullscreen(void);
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
// LESSON 48
GLuint shader_program_obj;
// LESSON 49
enum {
POSITION = 0,
// LESSON 51
COLOR = 1,
};
GLuint vao_triangle;
GLuint vbo_position_triangle;
GLuint mvp_uniform;
// LESSON 51
GLuint vbo_triangle_color;
vmath::mat4 perspective_projection_matrix;
// LESSON 54
GLuint vao_square;
GLuint vbo_position_square;
GLuint vbo_square_color;
// LESSON 56
GLuint color_uniform;
int color_type = 0;
// LESSON 57
GLuint bRed;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-OpenGL-App");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_KEYDOWN:
switch (wParam)
{
// LESSON 56
case VK_NUMPAD0:
color_type = 0;
break;
case VK_NUMPAD1:
color_type = 1;
break;
case VK_NUMPAD2:
color_type = 2;
break;
case VK_NUMPAD3:
color_type = 3;
break;
case VK_NUMPAD4:
color_type = 4;
break;
case VK_NUMPAD5:
color_type = 5;
break;
// VK_NUMPAD6 (0x66) has the same numeric value as 'f'
case VK_NUMPAD6:
color_type = 6;
break;
case VK_NUMPAD7:
color_type = 7;
break;
case VK_NUMPAD8:
color_type = 8;
break;
case VK_NUMPAD9:
color_type = 9;
break;
case VK_ESCAPE:
uninitialize();
PostQuitMessage(0);
break;
}
break;
case WM_SIZE:
resize(LOWORD(lParam), HIWORD(lParam));
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
// LESSON 46
GLenum result = glewInit();
if (result != GLEW_OK) {
return -5;
}
// LESSON 48 (You can write multiple vs and fs shaders)
// Setting up the vertex shader
GLuint vertex_shader_obj = glCreateShader(GL_VERTEX_SHADER); // Create the vertex shader object and get back its handle
const GLchar* vertex_shader = "#version 450 core" \
"\n" \
"in vec3 vpos;" \
"in vec3 color;" \
"out vec3 outColor;" \
"uniform mat4 mvp_matrix;" \
"void main()" \
"{" \
" gl_Position = mvp_matrix * vec4(vpos, 1.0f);" \
" outColor = color;" \
"}";
glShaderSource(vertex_shader_obj, 1, (const GLchar**)&vertex_shader, NULL); // Attach the source to the shader object (2nd param: number of strings; 4th: array of string lengths, NULL = null-terminated)
glCompileShader(vertex_shader_obj);
// Setting up fragment shader
GLuint fragment_shader_obj = glCreateShader(GL_FRAGMENT_SHADER);
// LESSON 49 ("core" selects the core profile: modern GLSL only, no deprecated legacy features)
// Select red or green depending on the bRed uniform
const GLchar* fragment_shader = "#version 450 core" \
"\n" \
"in vec3 outColor;" \
"out vec4 fragColor;" \
"uniform vec3 color;" \
"uniform int bRed;" \
"void main()" \
"{" \
" if (bRed == 1)" \
" fragColor = vec4(1.0, 0.0, 0.0, 1.0);" \
" else " \
" fragColor = vec4(0.0, 1.0, 0.0, 1.0);" \
"}";
glShaderSource(fragment_shader_obj, 1, (const GLchar**)&fragment_shader, NULL);
glCompileShader(fragment_shader_obj);
shader_program_obj = glCreateProgram();
glAttachShader(shader_program_obj, vertex_shader_obj);
glAttachShader(shader_program_obj, fragment_shader_obj);
// LESSON 49
glBindAttribLocation(shader_program_obj, POSITION, "vpos");
// LESSON 51
glBindAttribLocation(shader_program_obj, COLOR, "color");
glLinkProgram(shader_program_obj);
// LESSON 49
mvp_uniform = glGetUniformLocation(shader_program_obj, "mvp_matrix");
// LESSON 56
color_uniform = glGetUniformLocation(shader_program_obj, "color");
// LESSON 57
bRed = glGetUniformLocation(shader_program_obj, "bRed");
// LESSON 51
const GLfloat triangleColor[] = {
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f
};
// LESSON 54
const GLfloat squareColor[] = {
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f
};
//// LESSON 49
const GLfloat triangleVertices[] = {
// Perspective triangle (Front face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f,
0.0f, 1.0f, 0.0f,
1.0f, -1.0f, 1.0f,
1.0f, -1.0f, -1.0f,
0.0f, 1.0f, 0.0f,
1.0f, -1.0f, -1.0f,
-1.0f,-1.0f, -1.0f,
0.0f, 1.0f, 0.0f,
-1.0f, -1.0f, -1.0f,
-1.0f,-1.0f, 1.0f
};
// LESSON 54
const GLfloat squareVertices[] = {
// Perspective square (Front face)
1.0f, 1.0f, 1.0f, // Right top
-1.0f, 1.0f, 1.0f, // Left top
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f,
// Perspective square (Bottom face)
1.0f, -1.0f, -1.0f, // Right top
-1.0f, -1.0f, -1.0f, // Left top
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f, // Right bottom
// Perspective square (Front face)
1.0f, 1.0f, 1.0f, // Right top
-1.0f, 1.0f, 1.0f, // Left top
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f, // Right bottom
// Perspective square (Back face)
1.0f, 1.0f, -1.0f, // Right top
-1.0f, 1.0f, -1.0f, // Left top
-1.0f, -1.0f, -1.0f, // Left bottom
1.0f, -1.0f, -1.0f, // Right bottom
// Perspective square (Right face)
1.0f, 1.0f, -1.0f, // Right top
1.0f, 1.0f, 1.0f, // Left top
1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, -1.0f, // Right bottom
// Perspective square (Left face)
-1.0f, 1.0f, 1.0f, // Right top
-1.0f, 1.0f, -1.0f, // Left top
-1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, 1.0f // Right bottom
};
//// Generate a VAO name (a handle to GPU-side vertex-array state)
//glGenVertexArrays(1, &vao_triangle);
//// Bind it so subsequent vertex-array calls record into this VAO
//glBindVertexArray(vao_triangle);
//// Create a buffer object (VBO) for the position data
//glGenBuffers(1, &vbo_position_triangle);
//glBindBuffer(GL_ARRAY_BUFFER, vbo_position_triangle);
//// glBufferData params: target, size of the data in bytes, pointer to the data, usage hint
//glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
//// glVertexAttribPointer params: attribute index, components per vertex (3), data type, normalize?, stride (0 = tightly packed), offset into the buffer
//glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
//// Enable the attribute array so the vertex data reaches the shader
//glEnableVertexAttribArray(POSITION);
// LESSON 51
// glBindBuffer(GL_ARRAY_BUFFER, 0);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 54
glGenVertexArrays(1, &vao_square);
glBindVertexArray(vao_square);
glGenBuffers(1, &vbo_position_square);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_square);
glBufferData(GL_ARRAY_BUFFER, sizeof(squareVertices), squareVertices, GL_STATIC_DRAW);
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(POSITION);
glGenBuffers(1, &vbo_square_color);
glBindBuffer(GL_ARRAY_BUFFER, vbo_square_color);
glBufferData(GL_ARRAY_BUFFER, sizeof(squareColor), squareColor, GL_STATIC_DRAW);
glVertexAttribPointer(COLOR, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(COLOR);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
// LESSON 54
glGenVertexArrays(1, &vao_triangle);
glBindVertexArray(vao_triangle);
glGenBuffers(1, &vbo_position_triangle);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_triangle);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(POSITION);
glGenBuffers(1, &vbo_triangle_color);
glBindBuffer(GL_ARRAY_BUFFER, vbo_triangle_color);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleColor), triangleColor, GL_STATIC_DRAW);
glVertexAttribPointer(COLOR, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(COLOR);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 52
perspective_projection_matrix = vmath::mat4::identity();
resize(800, 600);
// LESSON 55
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
// LESSON 49
perspective_projection_matrix = vmath::perspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
// LESSON 55
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// LESSON 49
glUseProgram(shader_program_obj);
// LESSON 52
vmath::mat4 modelviewmatrix;
vmath::mat4 modelviewprojection;
static GLfloat angle = 0.0f;
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(-1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 1.0f, 0.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
// LESSON 57
glUniform1i(bRed, 1);
// LESSON 56
if (color_type == 0) {
glUniform3f(color_uniform, 1.0f, 0.0f, 0.0f);
}
else if (color_type == 1) {
glUniform3f(color_uniform, 0.0f, 1.0f, 0.0f);
}
else if (color_type == 2) {
glUniform3f(color_uniform, 0.0f, 0.0f, 1.0f);
}
else if (color_type == 3) {
glUniform3f(color_uniform, 1.0f, 1.0f, 0.0f);
}
else if (color_type == 4) {
glUniform3f(color_uniform, 0.0f, 1.0f, 1.0f);
}
else if (color_type == 5) {
glUniform3f(color_uniform, 1.0f, 0.0f, 1.0f);
}
else if (color_type == 6) {
glUniform3f(color_uniform, 1.0f, 0.7f, 0.1f);
}
else if (color_type == 7) {
glUniform3f(color_uniform, 1.0f, 1.0f, 1.0f);
}
else if (color_type == 8) {
glUniform3f(color_uniform, 0.5f, 1.0f, 0.2f);
}
else if (color_type == 9) {
glUniform3f(color_uniform, 1.0f, 0.7f, 4.0f);
}
// LESSON 54
glBindVertexArray(vao_triangle);
// LESSON 55
glDrawArrays(GL_TRIANGLES, 0, 12); // 4 pyramid faces as independent triangles (a single fan over 12 vertices would connect them wrongly)
// LESSON 54
glBindVertexArray(0);
// LESSON 53
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 0.0f, 1.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
// LESSON 57
glUniform1i(bRed, 0);
// LESSON 56
if (color_type == 0) {
glUniform3f(color_uniform, 1.0f, 0.0f, 0.0f);
}
else if (color_type == 1) {
glUniform3f(color_uniform, 0.0f, 1.0f, 0.0f);
}
else if (color_type == 2) {
glUniform3f(color_uniform, 0.0f, 0.0f, 1.0f);
}
else if (color_type == 3) {
glUniform3f(color_uniform, 1.0f, 1.0f, 0.0f);
}
else if (color_type == 4) {
glUniform3f(color_uniform, 0.0f, 1.0f, 1.0f);
}
else if (color_type == 5) {
glUniform3f(color_uniform, 1.0f, 0.0f, 1.0f);
}
else if (color_type == 6) {
glUniform3f(color_uniform, 1.0f, 0.7f, 0.1f);
}
else if (color_type == 7) {
glUniform3f(color_uniform, 1.0f, 1.0f, 1.0f);
}
else if (color_type == 8) {
glUniform3f(color_uniform, 0.5f, 1.0f, 0.2f);
}
else if (color_type == 9) {
glUniform3f(color_uniform, 1.0f, 0.7f, 4.0f);
}
glBindVertexArray(vao_square);
// LESSON 55 (This renders each side of the square individually)
//glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
//glDrawArrays(GL_TRIANGLE_FAN, 4, 4); // Second param is the index of the first vertex, third is the vertex count (4 per face)
//glDrawArrays(GL_TRIANGLE_FAN, 8, 4);
//glDrawArrays(GL_TRIANGLE_FAN, 12, 4);
//glDrawArrays(GL_TRIANGLE_FAN, 16, 4);
//glDrawArrays(GL_TRIANGLE_FAN, 20, 4);
// LESSON 55 (drawing the cube in one loop)
for (int i = 0; i < 21; i = i + 4)
glDrawArrays(GL_TRIANGLE_FAN, i, 4);
// LESSON 54
glBindVertexArray(0);
angle += 0.05f;
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
// glDeleteShader(vertex_shader);
// glDeleteShader(fragment_shader);
// glDeleteBuffers();
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
#ifdef _WIN32
#include <windows.h>
#endif
// LESSON 46
#include <GL/glew.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdbool.h>
#include "vmath.h"
#include "texture.h"
#pragma comment(lib, "opengl32.lib")
// LESSON 46
#pragma comment(lib, "glew32.lib")
#pragma comment(linker, "/subsystem:windows")
// using namespace vmath;
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
// LESSON 22
void uninitialize(void);
void toggle_fullscreen(void);
bool load_texture(GLuint*, TCHAR[]);
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
// LESSON 22
GLuint texture;
// LESSON 58
GLuint sampler_uniform;
GLuint vbo_tex_triangle;
GLuint vbo_tex_square;
// LESSON 48
GLuint shader_program_obj;
// LESSON 49
enum {
POSITION = 0,
// LESSON 51
COLOR = 1,
// LESSON 58
TEXTURE = 2,
};
GLuint vao_triangle;
GLuint vbo_position_triangle;
GLuint mvp_uniform;
// LESSON 51
GLuint vbo_triangle_color;
vmath::mat4 perspective_projection_matrix;
// LESSON 54
GLuint vao_square;
GLuint vbo_position_square;
GLuint vbo_square_color;
// LESSON 56
GLuint color_uniform;
int color_type = 0;
// LESSON 57
GLuint bRed;
int width = 800;
int height = 600;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-OpenGL-App");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_KEYDOWN:
switch (wParam)
{
// LESSON 57
case VK_NUMPAD0:
// Full screen
glViewport(0, 0, (GLsizei)width, (GLsizei)height);
break;
case VK_NUMPAD1:
// Lower left corner
glViewport(0, 0, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_NUMPAD2:
// Lower right corner
glViewport((GLsizei)width / 2, 0, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_NUMPAD3:
// Upper left corner
glViewport(0, (GLsizei)height / 2, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_NUMPAD4:
// Upper right corner
glViewport((GLsizei)width / 2, (GLsizei)height / 2, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_NUMPAD5:
// Whole right side
glViewport((GLsizei)width / 2, 0, (GLsizei)width / 2, (GLsizei)height);
break;
case VK_NUMPAD6:
// Whole left side
glViewport(0, 0, (GLsizei)width / 2, (GLsizei)height);
break;
case VK_NUMPAD7:
// Whole upper half
glViewport(0, (GLsizei)height / 2, (GLsizei)width, (GLsizei)height / 2);
break;
case VK_NUMPAD8:
// Whole lower half
glViewport(0, 0, (GLsizei)width, (GLsizei)height / 2);
break;
case VK_NUMPAD9:
// Centered
glViewport((GLsizei)width / 4, (GLsizei)height / 4, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_ESCAPE:
uninitialize();
PostQuitMessage(0);
break;
}
break;
case WM_SIZE:
// resize(LOWORD(lParam), HIWORD(lParam));
width = LOWORD(lParam);
height = HIWORD(lParam);
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
// LESSON 46
GLenum result = glewInit();
if (result != GLEW_OK) {
return -5;
}
// LESSON 48 (You can write multiple vs and fs shaders)
// Setting up the vertex shader
GLuint vertex_shader_obj = glCreateShader(GL_VERTEX_SHADER); // Create an empty vertex shader object and get back its handle
const GLchar* vertex_shader = "#version 450 core" \
"\n" \
"in vec3 vpos;" \
"in vec3 color;" \
"in vec2 tex;" \
"out vec2 outTex;" \
"out vec3 outColor;" \
"uniform mat4 mvp_matrix;" \
"void main()" \
"{" \
" gl_Position = mvp_matrix * vec4(vpos, 1.0f);" \
" outColor = color;" \
" outTex = tex;" \
"}";
glShaderSource(vertex_shader_obj, 1, (const GLchar**)&vertex_shader, NULL); // Attach the source to the shader object (2nd param: number of source strings; 4th: array of string lengths, NULL = null-terminated strings)
glCompileShader(vertex_shader_obj);
// Setting up fragment shader
GLuint fragment_shader_obj = glCreateShader(GL_FRAGMENT_SHADER);
// LESSON 49 ("core" selects the core profile rather than the legacy compatibility profile)
// The fragment shader runs once per fragment and here samples the bound texture
const GLchar* fragment_shader = "#version 450 core" \
"\n" \
"in vec3 outColor;" \
"in vec2 outTex;" \
"uniform sampler2D u_sampler;" \
"out vec4 fragColor;" \
"uniform vec3 color;" \
"uniform int bRed;" \
"void main()" \
"{" \
/*" if (bRed == 1)" \
" fragColor = vec4(1.0, 0.0, 0.0, 1.0);" \
" else " \
" fragColor = vec4(0.0, 1.0, 0.0, 1.0);" \*/
" fragColor = texture(u_sampler, outTex);" \
"}";
glShaderSource(fragment_shader_obj, 1, (const GLchar**)&fragment_shader, NULL);
glCompileShader(fragment_shader_obj);
shader_program_obj = glCreateProgram();
glAttachShader(shader_program_obj, vertex_shader_obj);
glAttachShader(shader_program_obj, fragment_shader_obj);
// LESSON 49
glBindAttribLocation(shader_program_obj, POSITION, "vpos");
// LESSON 51
glBindAttribLocation(shader_program_obj, COLOR, "color");
// LESSON 58
glBindAttribLocation(shader_program_obj, TEXTURE, "tex");
glLinkProgram(shader_program_obj);
// LESSON 49
mvp_uniform = glGetUniformLocation(shader_program_obj, "mvp_matrix");
// LESSON 56
color_uniform = glGetUniformLocation(shader_program_obj, "color");
// LESSON 57
bRed = glGetUniformLocation(shader_program_obj, "bRed");
// LESSON 58
sampler_uniform = glGetUniformLocation(shader_program_obj, "u_sampler");
// LESSON 51
const GLfloat triangleColor[] = {
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f
};
// LESSON 54
const GLfloat squareColor[] = {
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f
};
//// LESSON 49
const GLfloat triangleVertices[] = {
// Perspective triangle (Front face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f,
0.0f, 1.0f, 0.0f,
1.0f, -1.0f, 1.0f,
1.0f, -1.0f, -1.0f,
0.0f, 1.0f, 0.0f,
1.0f, -1.0f, -1.0f,
-1.0f,-1.0f, -1.0f,
0.0f, 1.0f, 0.0f,
-1.0f, -1.0f, -1.0f,
-1.0f,-1.0f, 1.0f
};
// LESSON 54
const GLfloat squareVertices[] = {
// Perspective square (Front face)
1.0f, 1.0f, 1.0f, // Right top
-1.0f, 1.0f, 1.0f, // Left top
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f,
//// Perspective square (Bottom face)
//1.0f, -1.0f, -1.0f, // Right top
//-1.0f, -1.0f, -1.0f, // Left top
//-1.0f, -1.0f, 1.0f, // Left bottom
//1.0f, -1.0f, 1.0f, // Right bottom
//// Perspective square (Front face)
//1.0f, 1.0f, 1.0f, // Right top
//-1.0f, 1.0f, 1.0f, // Left top
//-1.0f, -1.0f, 1.0f, // Left bottom
//1.0f, -1.0f, 1.0f, // Right bottom
//// Perspective square (Back face)
//1.0f, 1.0f, -1.0f, // Right top
//-1.0f, 1.0f, -1.0f, // Left top
//-1.0f, -1.0f, -1.0f, // Left bottom
//1.0f, -1.0f, -1.0f, // Right bottom
//// Perspective square (Right face)
//1.0f, 1.0f, -1.0f, // Right top
//1.0f, 1.0f, 1.0f, // Left top
//1.0f, -1.0f, 1.0f, // Left bottom
//1.0f, -1.0f, -1.0f, // Right bottom
//// Perspective square (Left face)
//-1.0f, 1.0f, 1.0f, // Right top
//-1.0f, 1.0f, -1.0f, // Left top
//-1.0f, -1.0f, -1.0f, // Left bottom
//-1.0f, -1.0f, 1.0f // Right bottom
};
// LESSON 58
const GLfloat textureUVs[] = {
/*0.5f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f*/
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f
};
//// Generate a vertex array object (VAO); its id is stored in the variable
//glGenVertexArrays(1, &vao_triangle);
//// Bind this instance to a state
//glBindVertexArray(vao_triangle);
//// Creating a subbuffer
//glGenBuffers(1, &vbo_position_triangle);
//glBindBuffer(GL_ARRAY_BUFFER, vbo_position_triangle);
//// 1st param: buffer target, 2nd: size of the data in bytes, 3rd: pointer to the data, 4th: usage hint (GL_STATIC_DRAW)
//glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
//// Once the buffer is filled, describe its layout: 1st: attribute index, 2nd: 3 components per vertex, 3rd: data type, 4th: normalization flag, 5th: stride, 6th: offset into the buffer
//glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
//// Enable the attribute array so it is used when drawing
//glEnableVertexAttribArray(POSITION);
// LESSON 51
// glBindBuffer(GL_ARRAY_BUFFER, 0);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 54
glGenVertexArrays(1, &vao_square);
glBindVertexArray(vao_square);
glGenBuffers(1, &vbo_position_square);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_square);
glBufferData(GL_ARRAY_BUFFER, sizeof(squareVertices), squareVertices, GL_STATIC_DRAW);
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(POSITION);
/*glGenBuffers(1, &vbo_square_color);
glBindBuffer(GL_ARRAY_BUFFER, vbo_square_color);
glBufferData(GL_ARRAY_BUFFER, sizeof(squareColor), squareColor, GL_STATIC_DRAW);
glVertexAttribPointer(COLOR, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(COLOR);*/
/*glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);*/
// LESSON 54
glGenVertexArrays(1, &vao_triangle);
glBindVertexArray(vao_triangle);
glGenBuffers(1, &vbo_position_triangle);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_triangle);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(POSITION);
glGenBuffers(1, &vbo_triangle_color);
glBindBuffer(GL_ARRAY_BUFFER, vbo_triangle_color);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleColor), triangleColor, GL_STATIC_DRAW);
glVertexAttribPointer(COLOR, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(COLOR);
glBindBuffer(GL_ARRAY_BUFFER, 0);
// LESSON 58
glGenBuffers(1, &vbo_tex_triangle);
glBindBuffer(GL_ARRAY_BUFFER, vbo_tex_triangle);
glBufferData(GL_ARRAY_BUFFER, sizeof(textureUVs), textureUVs, GL_STATIC_DRAW);
glVertexAttribPointer(TEXTURE, 2, GL_FLOAT, GL_FALSE, 0, NULL); // 2nd param: 2 components per vertex (UV coordinates)
glEnableVertexAttribArray(TEXTURE);
glBindVertexArray(0); // LESSON 58: moved down here
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 52
perspective_projection_matrix = vmath::mat4::identity();
resize(800, 600);
// LESSON 55
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
// LESSON 22
glEnable(GL_TEXTURE_2D);
load_texture(&texture, MAKEINTRESOURCE(IDBITMAP_TEXTURE));
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
// LESSON 49
perspective_projection_matrix = vmath::perspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
// LESSON 55
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// LESSON 49
glUseProgram(shader_program_obj);
// LESSON 52
vmath::mat4 modelviewmatrix;
vmath::mat4 modelviewprojection;
static GLfloat angle = 0.0f;
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(-1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 1.0f, 0.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
// LESSON 57
glUniform1i(bRed, 1);
// LESSON 58
glActiveTexture(GL_TEXTURE0);
glUniform1i(sampler_uniform, 0);
/*
One thing is missing from the texture setup; can you guess what?
*/
// LESSON 54
glBindVertexArray(vao_triangle);
// LESSON 55 (the pyramid is four independent triangles, so draw with GL_TRIANGLES)
glDrawArrays(GL_TRIANGLES, 0, 12);
// LESSON 54
glBindVertexArray(0);
// LESSON 53
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 0.0f, 1.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
// LESSON 57
glUniform1i(bRed, 0);
glBindVertexArray(vao_square);
// LESSON 55 (This renders each side of the square individually)
//glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
//glDrawArrays(GL_TRIANGLE_FAN, 4, 4); // 2nd param: index of the first vertex, 3rd: number of vertices to draw (4 per face)
//glDrawArrays(GL_TRIANGLE_FAN, 8, 4);
//glDrawArrays(GL_TRIANGLE_FAN, 12, 4);
//glDrawArrays(GL_TRIANGLE_FAN, 16, 4);
//glDrawArrays(GL_TRIANGLE_FAN, 20, 4);
// LESSON 55 (drawing a cube in one loop)
for (int i = 0; i < 24; i += 4)
glDrawArrays(GL_TRIANGLE_FAN, i, 4);
// LESSON 54
glBindVertexArray(0);
angle += 0.05f;
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
// glDeleteShader(vertex_shader);
// glDeleteShader(fragment_shader);
// glDeleteBuffers();
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
bool load_texture(GLuint* texture, TCHAR imageResourceId[])
{
HBITMAP bitmap = NULL;
BITMAP bmp;
bool bStatus = false;
// LESSON 58
bitmap = (HBITMAP)LoadImage(GetModuleHandle(NULL), imageResourceId, IMAGE_BITMAP, 0, 0, LR_CREATEDIBSECTION);
if (bitmap != NULL) {
GetObject(bitmap, sizeof(BITMAP), &bmp);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
// Generate texture
glGenTextures(1, texture);
glBindTexture(GL_TEXTURE_2D, *texture);
// Texture filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// Texture wrapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
//gluBuild2DMipmaps(GL_TEXTURE_2D, 3, bmp.bmWidth, bmp.bmHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, bmp.bmBits);
// LESSON 58
// 2nd param: the mipmap level being specified (0 = base image; glGenerateMipmap builds the rest)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, bmp.bmWidth, bmp.bmHeight, 0, GL_BGR_EXT, GL_UNSIGNED_BYTE, bmp.bmBits);
glGenerateMipmap(GL_TEXTURE_2D);
DeleteObject(bitmap);
bStatus = true;
}
return bStatus;
}
A shader can declare a sampler2D uniform, the GLSL type used to sample the texture bound to the corresponding texture unit; the fragment shader runs once per fragment (at 1080p, up to 1920 x 1080 times per draw).
// Texture tiling can be added in the frag shader with " fragColor = texture(u_sampler, outTex * 2.0);" to repeat the texture instead of stretching it
#ifdef _WIN32
#include <windows.h>
#endif
// LESSON 46
#include <GL/glew.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdbool.h>
#include "vmath.h"
#include "texture.h"
#pragma comment(lib, "opengl32.lib")
// LESSON 46
#pragma comment(lib, "glew32.lib")
#pragma comment(linker, "/subsystem:windows")
// using namespace vmath;
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
// LESSON 22
void uninitialize(void);
void toggle_fullscreen(void);
bool load_texture(GLuint*, TCHAR[]);
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
// LESSON 22
GLuint texture;
// LESSON 58
GLuint sampler_uniform;
GLuint vbo_tex_triangle;
GLuint vbo_tex_square;
// LESSON 48
GLuint shader_program_obj;
// LESSON 49
enum {
POSITION = 0,
// LESSON 51
COLOR = 1,
// LESSON 58
TEXTURE = 2,
};
GLuint vao_triangle;
GLuint vbo_position_triangle;
GLuint mvp_uniform;
// LESSON 51
GLuint vbo_triangle_color;
vmath::mat4 perspective_projection_matrix;
// LESSON 54
GLuint vao_square;
GLuint vbo_position_square;
GLuint vbo_square_color;
// LESSON 56
GLuint color_uniform;
int color_type = 0;
// LESSON 57
GLuint bRed;
int width = 800;
int height = 600;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-OpenGL-App");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_KEYDOWN:
switch (wParam)
{
// LESSON 57
case VK_NUMPAD0:
// Full screen
glViewport(0, 0, (GLsizei)width, (GLsizei)height);
break;
case VK_NUMPAD1:
// Lower left corner
glViewport(0, 0, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_NUMPAD2:
// Lower right corner
glViewport((GLsizei)width / 2, 0, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_NUMPAD3:
// Upper left corner
glViewport(0, (GLsizei)height / 2, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_NUMPAD4:
// Upper right corner
glViewport((GLsizei)width / 2, (GLsizei)height / 2, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_NUMPAD5:
// Whole right side
glViewport((GLsizei)width / 2, 0, (GLsizei)width / 2, (GLsizei)height);
break;
case VK_NUMPAD6:
// Whole left side
glViewport(0, 0, (GLsizei)width / 2, (GLsizei)height);
break;
case VK_NUMPAD7:
// Whole upper half
glViewport(0, (GLsizei)height / 2, (GLsizei)width, (GLsizei)height / 2);
break;
case VK_NUMPAD8:
// Whole lower half
glViewport(0, 0, (GLsizei)width, (GLsizei)height / 2);
break;
case VK_NUMPAD9:
// Centered
glViewport((GLsizei)width / 4, (GLsizei)height / 4, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_ESCAPE:
uninitialize();
PostQuitMessage(0);
break;
}
break;
case WM_SIZE:
// resize(LOWORD(lParam), HIWORD(lParam));
width = LOWORD(lParam);
height = HIWORD(lParam);
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
// LESSON 46
GLenum result = glewInit();
if (result != GLEW_OK) {
return -5;
}
// LESSON 48 (You can write multiple vs and fs shaders)
// Setting up the vertex shader
GLuint vertex_shader_obj = glCreateShader(GL_VERTEX_SHADER); // Create an empty vertex shader object and get back its handle
const GLchar* vertex_shader = "#version 450 core" \
"\n" \
"in vec3 vpos;" \
"in vec3 color;" \
"in vec2 tex;" \
"out vec2 outTex;" \
"out vec3 outColor;" \
"uniform mat4 mvp_matrix;" \
"void main()" \
"{" \
" gl_Position = mvp_matrix * vec4(vpos, 1.0f);" \
" outColor = color;" \
" outTex = tex;" \
"}";
glShaderSource(vertex_shader_obj, 1, (const GLchar**)&vertex_shader, NULL); // Attach the source to the shader object (2nd param: number of source strings; 4th: array of string lengths, NULL = null-terminated strings)
glCompileShader(vertex_shader_obj);
// Setting up fragment shader
GLuint fragment_shader_obj = glCreateShader(GL_FRAGMENT_SHADER);
// LESSON 49 ("core" selects the core profile rather than the legacy compatibility profile)
// The fragment shader runs once per fragment and here samples the bound texture
const GLchar* fragment_shader = "#version 450 core" \
"\n" \
"in vec3 outColor;" \
"in vec2 outTex;" \
"uniform sampler2D u_sampler;" \
"out vec4 fragColor;" \
"uniform vec3 color;" \
"uniform int bRed;" \
"void main()" \
"{" \
/*" if (bRed == 1)" \
" fragColor = vec4(1.0, 0.0, 0.0, 1.0);" \
" else " \
" fragColor = vec4(0.0, 1.0, 0.0, 1.0);" \*/
// Texture tiling could be added here by scaling the UVs (e.g. outTex * 2.0)
" fragColor = texture(u_sampler, outTex);" \
"}";
glShaderSource(fragment_shader_obj, 1, (const GLchar**)&fragment_shader, NULL);
glCompileShader(fragment_shader_obj);
shader_program_obj = glCreateProgram();
glAttachShader(shader_program_obj, vertex_shader_obj);
glAttachShader(shader_program_obj, fragment_shader_obj);
// LESSON 49
glBindAttribLocation(shader_program_obj, POSITION, "vpos");
// LESSON 51
glBindAttribLocation(shader_program_obj, COLOR, "color");
// LESSON 58
glBindAttribLocation(shader_program_obj, TEXTURE, "tex");
glLinkProgram(shader_program_obj);
// LESSON 49
mvp_uniform = glGetUniformLocation(shader_program_obj, "mvp_matrix");
// LESSON 56
color_uniform = glGetUniformLocation(shader_program_obj, "color");
// LESSON 57
bRed = glGetUniformLocation(shader_program_obj, "bRed");
// LESSON 58
sampler_uniform = glGetUniformLocation(shader_program_obj, "u_sampler");
// LESSON 51
const GLfloat triangleColor[] = {
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f
};
// LESSON 54
const GLfloat squareColor[] = {
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f
};
//// LESSON 49
const GLfloat triangleVertices[] = {
// Perspective triangle (Front face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f,
0.0f, 1.0f, 0.0f,
1.0f, -1.0f, 1.0f,
1.0f, -1.0f, -1.0f,
0.0f, 1.0f, 0.0f,
1.0f, -1.0f, -1.0f,
-1.0f,-1.0f, -1.0f,
0.0f, 1.0f, 0.0f,
-1.0f, -1.0f, -1.0f,
-1.0f,-1.0f, 1.0f
};
// LESSON 54
const GLfloat squareVertices[] = {
// Perspective square (Front face)
1.0f, 1.0f, 1.0f, // Right top
-1.0f, 1.0f, 1.0f, // Left top
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f,
// Perspective square (Bottom face)
1.0f, -1.0f, -1.0f, // Right top
-1.0f, -1.0f, -1.0f, // Left top
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f, // Right bottom
// Perspective square (Front face)
1.0f, 1.0f, 1.0f, // Right top
-1.0f, 1.0f, 1.0f, // Left top
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f, // Right bottom
// Perspective square (Back face)
1.0f, 1.0f, -1.0f, // Right top
-1.0f, 1.0f, -1.0f, // Left top
-1.0f, -1.0f, -1.0f, // Left bottom
1.0f, -1.0f, -1.0f, // Right bottom
// Perspective square (Right face)
1.0f, 1.0f, -1.0f, // Right top
1.0f, 1.0f, 1.0f, // Left top
1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, -1.0f, // Right bottom
// Perspective square (Left face)
-1.0f, 1.0f, 1.0f, // Right top
-1.0f, 1.0f, -1.0f, // Left top
-1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, 1.0f // Right bottom
};
// LESSON 58
const GLfloat textureUVs[] = {
/*0.5f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f*/
// Give each face its own set of UVs (same orientation on every face)
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f
};
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 54
glGenVertexArrays(1, &vao_square);
glBindVertexArray(vao_square);
glGenBuffers(1, &vbo_position_square);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_square);
glBufferData(GL_ARRAY_BUFFER, sizeof(squareVertices), squareVertices, GL_STATIC_DRAW);
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(POSITION);
// LESSON 58
glGenBuffers(1, &vbo_tex_square);
glBindBuffer(GL_ARRAY_BUFFER, vbo_tex_square);
glBufferData(GL_ARRAY_BUFFER, sizeof(textureUVs), textureUVs, GL_STATIC_DRAW);
glVertexAttribPointer(TEXTURE, 2, GL_FLOAT, GL_FALSE, 0, NULL); // 2nd param: 2 components per vertex (UV coordinates)
glEnableVertexAttribArray(TEXTURE);
// LESSON 59
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0); // LESSON 58: moved down here
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 52
perspective_projection_matrix = vmath::mat4::identity();
resize(800, 600);
// LESSON 55
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
// LESSON 22
glEnable(GL_TEXTURE_2D);
load_texture(&texture, MAKEINTRESOURCE(IDBITMAP_TEXTURE));
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
// LESSON 49
perspective_projection_matrix = vmath::perspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
// LESSON 55
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// LESSON 49
glUseProgram(shader_program_obj);
// LESSON 52
vmath::mat4 modelviewmatrix;
vmath::mat4 modelviewprojection;
static GLfloat angle = 0.0f;
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(0.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 1.0f, 0.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
// LESSON 58
glActiveTexture(GL_TEXTURE0);
glUniform1i(sampler_uniform, 0);
// LESSON 59
glBindVertexArray(vao_square);
/*glDrawArrays(GL_TRIANGLE_FAN, 0, 4);*/
for (int i = 0; i < 24; i += 4)
glDrawArrays(GL_TRIANGLE_FAN, i, 4);
angle += 0.1f;
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
// glDeleteShader(vertex_shader);
// glDeleteShader(fragment_shader);
// glDeleteBuffers();
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
bool load_texture(GLuint* texture, TCHAR imageResourceId[])
{
HBITMAP bitmap = NULL;
BITMAP bmp;
bool bStatus = false;
// LESSON 58
bitmap = (HBITMAP)LoadImage(GetModuleHandle(NULL), imageResourceId, IMAGE_BITMAP, 0, 0, LR_CREATEDIBSECTION);
if (bitmap != NULL) {
GetObject(bitmap, sizeof(BITMAP), &bmp);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
// Generate texture
glGenTextures(1, texture);
glBindTexture(GL_TEXTURE_2D, *texture);
// Texture filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// Texture wrapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
//gluBuild2DMipmaps(GL_TEXTURE_2D, 3, bmp.bmWidth, bmp.bmHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, bmp.bmBits);
// LESSON 58
// 2nd param: the mipmap level being specified (0 = base image; glGenerateMipmap builds the rest)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, bmp.bmWidth, bmp.bmHeight, 0, GL_BGR_EXT, GL_UNSIGNED_BYTE, bmp.bmBits);
glGenerateMipmap(GL_TEXTURE_2D);
DeleteObject(bitmap);
bStatus = true;
}
return bStatus;
}
#ifdef _WIN32
#include <windows.h>
#endif
// LESSON 46
#include <GL/glew.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdbool.h>
#include "vmath.h"
#include "texture.h"
#pragma comment(lib, "opengl32.lib")
// LESSON 46
#pragma comment(lib, "glew32.lib")
#pragma comment(linker, "/subsystem:windows")
// using namespace vmath;
LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
int initialize(void);
void resize(int, int);
void display(void);
// LESSON 22
void uninitialize(void);
void toggle_fullscreen(void);
bool load_texture(GLuint*, TCHAR[]);
HWND g_hwnd;
HDC g_hdc = NULL;
HGLRC g_hrc = NULL;
DWORD dwStyle;
HMONITOR hMonitor;
WINDOWPLACEMENT wpPrev = { sizeof(WINDOWPLACEMENT) };
bool bIsMonitorInfo;
bool bIsWindowPlacement;
bool bIsRunning = true;
bool bIsFullscreen = false;
// LESSON 22
GLuint texture;
// LESSON 60
GLuint texture_tri;
// LESSON 58
GLuint sampler_uniform;
GLuint vbo_tex_triangle;
GLuint vbo_tex_square;
// LESSON 48
GLuint shader_program_obj;
// LESSON 49
enum {
POSITION = 0,
// LESSON 51
COLOR = 1,
// LESSON 58
TEXTURE = 2,
};
GLuint vao_triangle;
GLuint vbo_position_triangle;
GLuint mvp_uniform;
// LESSON 51
GLuint vbo_triangle_color;
vmath::mat4 perspective_projection_matrix;
// LESSON 54
GLuint vao_square;
GLuint vbo_position_square;
GLuint vbo_square_color;
// LESSON 56
GLuint color_uniform;
int color_type = 0;
// LESSON 57
GLuint bRed;
int width = 800;
int height = 600;
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
// Window dimensions
int sWindowWidth = 800;
int sWindowHeight = 600;
int x = 0;
int y = 0;
int monitorHalfWidth = 0;
int monitorHalfHeight = 0;
int monitorWidth = GetSystemMetrics(SM_CXFULLSCREEN);
int monitorHeight = GetSystemMetrics(SM_CYFULLSCREEN);
// Centering the starting point
monitorHalfWidth = monitorWidth / 2;
monitorHalfHeight = monitorHeight / 2;
// Starting point
x = monitorHalfWidth - sWindowWidth / 2;
y = monitorHalfHeight - sWindowHeight / 2;
WNDCLASSEX wndclass;
HWND hwnd;
MSG msg;
TCHAR szAppName[] = TEXT("Win32-API-OpenGL-App");
wndclass.cbSize = sizeof(WNDCLASSEX);
wndclass.style = CS_HREDRAW | CS_VREDRAW;
wndclass.cbClsExtra = 0;
wndclass.cbWndExtra = 0;
wndclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndclass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndclass.lpszClassName = szAppName;
wndclass.lpszMenuName = NULL;
wndclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
wndclass.lpfnWndProc = WndProc;
wndclass.hInstance = hInstance;
RegisterClassEx(&wndclass);
hwnd = CreateWindow(
szAppName,
TEXT("Win32-API-SDK"),
WS_OVERLAPPEDWINDOW,
x,
y,
sWindowWidth,
sWindowHeight,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hwnd, SW_NORMAL);
g_hwnd = hwnd;
int result = initialize();
while (bIsRunning == true) {
if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
if (msg.message == WM_QUIT) {
bIsRunning = false;
}
else {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else {
display();
}
}
return ((int)msg.wParam);
}
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
switch (uMsg)
{
case WM_CHAR:
switch (wParam)
{
case 'f':
case 'F':
toggle_fullscreen();
break;
}
break;
case WM_KEYDOWN:
switch (wParam)
{
// LESSON 57
case VK_NUMPAD0:
// Full screen
glViewport(0, 0, (GLsizei)width, (GLsizei)height);
break;
case VK_NUMPAD1:
// Lower left corner
glViewport(0, 0, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_NUMPAD2:
// Lower right corner
glViewport((GLsizei)width / 2, 0, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_NUMPAD3:
// Upper left corner
glViewport(0, (GLsizei)height / 2, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_NUMPAD4:
// Upper right corner
glViewport((GLsizei)width / 2, (GLsizei)height / 2, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_NUMPAD5:
// Whole right side
glViewport((GLsizei)width / 2, 0, (GLsizei)width / 2, (GLsizei)height);
break;
case VK_NUMPAD6:
// Whole left side
glViewport(0, 0, (GLsizei)width / 2, (GLsizei)height);
break;
case VK_NUMPAD7:
// Whole upper half
glViewport(0, (GLsizei)height / 2, (GLsizei)width, (GLsizei)height / 2);
break;
case VK_NUMPAD8:
// Whole lower half
glViewport(0, 0, (GLsizei)width, (GLsizei)height / 2);
break;
case VK_NUMPAD9:
// Centered
glViewport((GLsizei)width / 4, (GLsizei)height / 4, (GLsizei)width / 2, (GLsizei)height / 2);
break;
case VK_ESCAPE:
uninitialize();
PostQuitMessage(0);
break;
}
break;
case WM_SIZE:
// resize(LOWORD(lParam), HIWORD(lParam));
width = LOWORD(lParam);
height = HIWORD(lParam);
break;
case WM_DESTROY:
uninitialize();
PostQuitMessage(0);
break;
}
return (DefWindowProc(hwnd, uMsg, wParam, lParam));
}
int initialize()
{
PIXELFORMATDESCRIPTOR pfd;
int iPixelFormatIndex;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits = 8;
pfd.cAlphaBits = 8;
g_hdc = GetDC(g_hwnd);
iPixelFormatIndex = ChoosePixelFormat(g_hdc, &pfd);
if (iPixelFormatIndex == 0) {
return -1;
}
if (SetPixelFormat(g_hdc, iPixelFormatIndex, &pfd) == FALSE) {
return -2;
}
g_hrc = wglCreateContext(g_hdc);
if (g_hrc == NULL) {
return -3;
}
if (wglMakeCurrent(g_hdc, g_hrc) == FALSE) {
return -4;
}
// LESSON 46
GLenum result = glewInit();
if (result != GLEW_OK) {
return -5;
}
// LESSON 48 (You can write multiple vs and fs shaders)
// Setting up the vertex shader
GLuint vertex_shader_obj = glCreateShader(GL_VERTEX_SHADER); // Give the pointer to the vertex shader obj (this will create the shader)
const GLchar* vertex_shader = "#version 450 core" \
"\n" \
"in vec3 vpos;" \
"in vec3 color;" \
"in vec2 tex;" \
"out vec2 outTex;" \
"out vec3 outColor;" \
"uniform mat4 mvp_matrix;" \
"void main()" \
"{" \
" gl_Position = mvp_matrix * vec4(vpos, 1.0f);" \
" outColor = color;" \
" outTex = tex;" \
"}";
glShaderSource(vertex_shader_obj, 1, (const GLchar**)&vertex_shader, NULL); // This will take the vert shader and fill the shader in the vs into the vs obj (sec param is nr of shaders to compile) (4th is amount of lines to compile from top)
glCompileShader(vertex_shader_obj);
// Setting up fragment shader
GLuint fragment_shader_obj = glCreateShader(GL_FRAGMENT_SHADER);
// LESSON 49 (core tells ogl to use the core (latest shader vers vs legacy))
// Emitting a blue color to whatever the vert has passed
const GLchar* fragment_shader = "#version 450 core" \
"\n" \
"in vec3 outColor;" \
"in vec2 outTex;" \
"uniform sampler2D u_sampler;" \
"out vec4 fragColor;" \
"uniform vec3 color;" \
"uniform int bRed;" \
"void main()" \
"{" \
/*" if (bRed == 1)" \
" fragColor = vec4(1.0, 0.0, 0.0, 1.0);" \
" else " \
" fragColor = vec4(0.0, 1.0, 0.0, 1.0);" \*/
// Added texture tiling to the frag shader
" fragColor = texture(u_sampler, outTex);" \
"}";
glShaderSource(fragment_shader_obj, 1, (const GLchar**)&fragment_shader, NULL);
glCompileShader(fragment_shader_obj);
shader_program_obj = glCreateProgram();
glAttachShader(shader_program_obj, vertex_shader_obj);
glAttachShader(shader_program_obj, fragment_shader_obj);
// LESSON 49
glBindAttribLocation(shader_program_obj, POSITION, "vpos");
// LESSON 51
glBindAttribLocation(shader_program_obj, COLOR, "color");
// LESSON 58
glBindAttribLocation(shader_program_obj, TEXTURE, "tex");
glLinkProgram(shader_program_obj);
// LESSON 49
mvp_uniform = glGetUniformLocation(shader_program_obj, "mvp_matrix");
// LESSON 56
color_uniform = glGetUniformLocation(shader_program_obj, "color");
// LESSON 57
bRed = glGetUniformLocation(shader_program_obj, "bRed");
// LESSON 58
sampler_uniform = glGetUniformLocation(shader_program_obj, "u_sampler");
// LESSON 51
const GLfloat triangleColor[] = {
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f
};
// LESSON 54
const GLfloat squareColor[] = {
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f
};
//// LESSON 49
const GLfloat triangleVertices[] = {
// Perspective triangle (Front face)
0.0f, 1.0f, 0.0f, // Apex
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f,
0.0f, 1.0f, 0.0f,
1.0f, -1.0f, 1.0f,
1.0f, -1.0f, -1.0f,
0.0f, 1.0f, 0.0f,
1.0f, -1.0f, -1.0f,
-1.0f,-1.0f, -1.0f,
0.0f, 1.0f, 0.0f,
-1.0f, -1.0f, -1.0f,
-1.0f,-1.0f, 1.0f
};
// LESSON 54
const GLfloat squareVertices[] = {
// Perspective square (Front face)
1.0f, 1.0f, 1.0f, // Right top
-1.0f, 1.0f, 1.0f, // Left top
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f,
// Perspective square (Bottom face)
1.0f, -1.0f, -1.0f, // Right top
-1.0f, -1.0f, -1.0f, // Left top
-1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, 1.0f, // Right bottom
// Perspective square (Top face)
1.0f, 1.0f, -1.0f, // Right top
-1.0f, 1.0f, -1.0f, // Left top
-1.0f, 1.0f, 1.0f, // Left bottom
1.0f, 1.0f, 1.0f, // Right bottom
// Perspective square (Back face)
1.0f, 1.0f, -1.0f, // Right top
-1.0f, 1.0f, -1.0f, // Left top
-1.0f, -1.0f, -1.0f, // Left bottom
1.0f, -1.0f, -1.0f, // Right bottom
// Perspective square (Right face)
1.0f, 1.0f, -1.0f, // Right top
1.0f, 1.0f, 1.0f, // Left top
1.0f, -1.0f, 1.0f, // Left bottom
1.0f, -1.0f, -1.0f, // Right bottom
// Perspective square (Left face)
-1.0f, 1.0f, 1.0f, // Right top
-1.0f, 1.0f, -1.0f, // Left top
-1.0f, -1.0f, -1.0f, // Left bottom
-1.0f, -1.0f, 1.0f // Right bottom
};
// LESSON 58
const GLfloat textureUVs[] = {
/*0.5f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f*/
// Give each UVs based on direction to viewer
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f
};
const GLfloat triTextUVs[] = {
0.5f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
0.5f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
0.5f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
0.5f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f
};
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 54
glGenVertexArrays(1, &vao_square);
glBindVertexArray(vao_square);
glGenBuffers(1, &vbo_position_square);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_square);
glBufferData(GL_ARRAY_BUFFER, sizeof(squareVertices), squareVertices, GL_STATIC_DRAW);
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(POSITION);
// LESSON 58
glGenBuffers(1, &vbo_tex_square);
glBindBuffer(GL_ARRAY_BUFFER, vbo_tex_square);
glBufferData(GL_ARRAY_BUFFER, sizeof(textureUVs), textureUVs, GL_STATIC_DRAW);
glVertexAttribPointer(TEXTURE, 2, GL_FLOAT, GL_FALSE, 0, NULL); // 2nd param: 2 bec UVs
glEnableVertexAttribArray(TEXTURE);
// LESSON 60
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
// LESSON 60
glGenVertexArrays(1, &vao_triangle);
glBindVertexArray(vao_triangle);
glGenBuffers(1, &vbo_position_triangle);
glBindBuffer(GL_ARRAY_BUFFER, vbo_position_triangle);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleVertices), triangleVertices, GL_STATIC_DRAW);
glVertexAttribPointer(POSITION, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(POSITION);
// LESSON 60
glGenBuffers(1, &vbo_tex_triangle);
glBindBuffer(GL_ARRAY_BUFFER, vbo_tex_triangle);
glBufferData(GL_ARRAY_BUFFER, sizeof(triTextUVs), triTextUVs, GL_STATIC_DRAW);
glVertexAttribPointer(TEXTURE, 2, GL_FLOAT, GL_FALSE, 0, NULL); // 2nd param: 2 bec UVs
glEnableVertexAttribArray(TEXTURE);
// LESSON 59
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0); // LESSON 58: moved down here
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// LESSON 52
perspective_projection_matrix = vmath::mat4::identity();
resize(800, 600);
// LESSON 55
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
// LESSON 22
glEnable(GL_TEXTURE_2D);
load_texture(&texture, MAKEINTRESOURCE(IDBITMAP_TEXTURE1));
load_texture(&texture_tri, MAKEINTRESOURCE(IDBITMAP_TEXTURE2));
return 0;
}
void resize(int w, int h)
{
if (h == 0)
h = 1;
glViewport(0, 0, (GLsizei)w, (GLsizei)h);
// LESSON 49
perspective_projection_matrix = vmath::perspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
}
void display(void)
{
// LESSON 55
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// LESSON 49
glUseProgram(shader_program_obj);
// LESSON 52
vmath::mat4 modelviewmatrix;
vmath::mat4 modelviewprojection;
static GLfloat angle = 0.0f;
// LESSON 60
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 0.0f, 1.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
// LESSON 58
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(sampler_uniform, 0);
// LESSON 59
glBindVertexArray(vao_square);
/*glDrawArrays(GL_TRIANGLE_FAN, 0, 4);*/
for (int i = 0; i < 24; i = i + 4)
glDrawArrays(GL_TRIANGLE_FAN, i, 4);
// LESSON 60 (moved down here)
modelviewmatrix = vmath::mat4::identity();
modelviewprojection = vmath::mat4::identity();
modelviewmatrix = vmath::translate(-1.0f, 0.0f, -3.0f);
modelviewmatrix *= vmath::scale(0.5f, 0.5f, 0.5f);
modelviewmatrix *= vmath::rotate(angle, 1.0f, 0.0f, 0.0f);
modelviewprojection = perspective_projection_matrix * modelviewmatrix;
glUniformMatrix4fv(mvp_uniform, 1, GL_FALSE, modelviewprojection);
// LESSON 60
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture_tri);
glUniform1i(sampler_uniform, 0);
glBindVertexArray(vao_triangle);
glDrawArrays(GL_TRIANGLES, 0, 12);
angle += 0.1f;
SwapBuffers(g_hdc);
}
void uninitialize(void)
{
if (bIsFullscreen == true)
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
}
// Delete GL objects while the context is still current
glDeleteBuffers(1, &vbo_position_triangle);
glDeleteBuffers(1, &vbo_tex_triangle);
glDeleteBuffers(1, &vbo_position_square);
glDeleteBuffers(1, &vbo_tex_square);
glDeleteVertexArrays(1, &vao_triangle);
glDeleteVertexArrays(1, &vao_square);
glDeleteProgram(shader_program_obj);
if (wglGetCurrentContext() == g_hrc) {
wglMakeCurrent(NULL, NULL);
}
if (g_hrc) {
wglDeleteContext(g_hrc);
g_hrc = NULL;
}
if (g_hdc) {
ReleaseDC(g_hwnd, g_hdc);
g_hdc = NULL;
}
}
void toggle_fullscreen(void)
{
MONITORINFO mi;
if (bIsFullscreen == false) {
mi.cbSize = sizeof(MONITORINFO);
dwStyle = GetWindowLong(g_hwnd, GWL_STYLE);
if (dwStyle & WS_OVERLAPPEDWINDOW) {
bIsWindowPlacement = GetWindowPlacement(g_hwnd, &wpPrev);
hMonitor = MonitorFromWindow(g_hwnd, MONITOR_DEFAULTTOPRIMARY);
bIsMonitorInfo = GetMonitorInfo(hMonitor, &mi);
if (bIsWindowPlacement == true && bIsMonitorInfo == true) {
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle & ~WS_OVERLAPPEDWINDOW);
SetWindowPos(g_hwnd, HWND_TOP,
mi.rcMonitor.left,
mi.rcMonitor.top,
mi.rcMonitor.right - mi.rcMonitor.left,
mi.rcMonitor.bottom - mi.rcMonitor.top,
SWP_NOZORDER | SWP_FRAMECHANGED);
}
}
ShowCursor(FALSE);
bIsFullscreen = true;
}
else
{
SetWindowLong(g_hwnd, GWL_STYLE, dwStyle | WS_OVERLAPPEDWINDOW);
SetWindowPlacement(g_hwnd, &wpPrev);
SetWindowPos(g_hwnd, HWND_TOP, 0, 0, 0, 0, SWP_NOZORDER | SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER);
ShowCursor(TRUE);
bIsFullscreen = false;
}
}
bool load_texture(GLuint* texture, TCHAR imageResourceId[])
{
HBITMAP bitmap = NULL;
BITMAP bmp;
bool bStatus = false;
// LESSON 58
bitmap = (HBITMAP)LoadImage(GetModuleHandle(NULL), imageResourceId, IMAGE_BITMAP, 0, 0, LR_CREATEDIBSECTION);
if (bitmap != NULL) {
GetObject(bitmap, sizeof(BITMAP), &bmp);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
// Generate texture
glGenTextures(1, texture);
glBindTexture(GL_TEXTURE_2D, *texture);
// Texture filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// Texture wrapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
//gluBuild2DMipmaps(GL_TEXTURE_2D, 3, bmp.bmWidth, bmp.bmHeight, GL_BGR_EXT, GL_UNSIGNED_BYTE, bmp.bmBits);
// LESSON 58
// 2nd param: mipmap levels (by 2s),
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, bmp.bmWidth, bmp.bmHeight, 0, GL_BGR_EXT, GL_UNSIGNED_BYTE, bmp.bmBits);
glGenerateMipmap(GL_TEXTURE_2D);
DeleteObject(bitmap);
bStatus = true;
}
return bStatus;
}
From Fermat's Library via LinkedIn
π ≈ 666 / 212
It's a palindromic approximation of pi
The Phi Ratio (Golden Mean)
The Golden Mean, represented by the Greek letter phi, is an irrational number, like e or pi, which seems to arise out of the basic
structure of nature. It is defined as a ratio of height to width of 1:1.618. The Phi Ratio appears regularly in the proportions
of plants, animals, DNA, the solar system and even population growth. Since early mankind, the phi ratio has been represented
in the design of structures, tools and creative arts.
Its use in early societies that had no formal understanding of mathematics leads scholars to believe that phi is a fundamental
subconscious aesthetic preference.
The Phi Ratio is applied more commonly today in graphic arts, photography and design as the "Rule of Thirds". The Rule of Thirds
states that people have a strong visual attraction to objects that reside at the intersections of hypothetical lines on a page
or in a photograph that is divided into thirds vertically and/or horizontally (Lidwell, Holden, Butler 2003). The rule of thirds
is simply a representation of the world in which we live. Whether natural as in beach, water and sky, or manmade as in floors, walls and
ceilings, these horizontal divisions are so innate to our nature that they are barely, if ever, consciously perceived.
A is to B as B is to C______________
A ------------
B --------
C ----
|-------------| A
| |
|--------|----| B & C
| |
_______________
Try implementing a full world with lots of models to create realistic scenery.
glBegin(GL_LINE_STRIP);
for (i = p1; i <= p2; i++) {
glEvalCoord1f(u1 + i*(u2-u1)/n);
}
glEnd();
Note: if i = 0 or i = n, then glEvalCoord1f() is called with exactly u1 or u2 as parameter.
Please see OpenGL Programming Guide 7th Ed. p 578 for further details
Two-dimensional Evaluators [...] must take u and v into account. Points, colors, normals, or texture coordinates must be supplied over a surface instead of a curve. Mathematically, the definition of a Bézier surface patch is given by
S(u, v) = Σ (i=0..n) Σ (j=0..m) B_i^n(u) B_j^m(v) P_ij
where the P_ij values are a set of m*n control points, and the B_i^n and B_j^m functions are the same Bernstein polynomials as in one dimension. As before, the P_ij values can represent vertices, normals, colors, or texture coordinates.
The procedure for using two-dimensional evaluators is similar to the procedure for one dimension:
Please see OpenGL Programming Guide 7th Ed. p 578 for further details
DSA for OpenGL 4.4: https://registry.khronos.org/OpenGL/extensions/ARB/ARB_direct_state_access.txt
8 daily habits that will make you a better developer:
1. Dedicate time to learning
Block out specific time in your calendar for learning new languages, frameworks or best practices. The landscape is constantly evolving and you need to build the habit of staying ahead of the curve.
2. Practice problem-solving
Coding is all about problem solving. Tackle some coding challenges, solve a real puzzle or do some brain teasers. Keep your mind active and improve your problem-solving skills.
3. Collaborate and communicate
Great developers are team players. Practice effective communication through sharing your knowledge, ideas and experience with others.
4. Embrace code reviews
Don't shy away from reviews. Being open to feedback is the best habit you can have.
5. Prioritize time management
Break down work into smaller, more manageable tasks. This helps you stay focused and productive.
6. Write clean code
Take the extra time required to write cleaner code. It's an investment that has the best returns.
7. Test, Test, Test
Build a habit of thorough testing. You'll thank yourself later.
8. Take care of yourself
Your health and mindset directly impact your productivity. A healthy developer is a productive developer.
Fermat's Library
The fifth hyperfactorial: 5⁵ × 4⁴ × 3³ × 2² × 1¹ = 86400000 milliseconds is exactly 1 day
1 day has 24 hours: 24=4·3·2
1 hour has 60 minutes: 60=5·4·3
1 minute has 60 seconds: 60=5·4·3
1 second has 1000 milliseconds: 1000=5·5·5·4·2
Source: https://www.reddit.com/r/GraphicsProgramming/comments/yibs49/math_struggle_with_dot_products_vector_algebra/
I'm currently working through the ray tracing in one weekend books and while I've now got a pretty good grasp on everything in here, I'm really hung up on not being able to understand a bit of math surrounding this:
(A + tb − C) ⋅ (A + tb − C) = r^2
The rules of vector algebra are all that we would want here. If we expand that equation and move all the terms to the left hand side we get:
t^2 b ⋅ b + 2tb ⋅ (A−C) + (A−C) ⋅ (A−C) − r^2 = 0
Anyone able to help explain how the author has made this jump using vector algebra? Thanks.
---
(A + tb − C) ⋅ (A + tb − C) = r^2
(tb + (A − C)) ⋅ (tb + (A − C)) − r^2 = 0
t^2 b ⋅ b + tb ⋅ (A − C) + (A − C) ⋅ tb + (A − C) ⋅ (A − C) − r^2 = 0
t^2 b ⋅ b + 2tb ⋅ (A − C) + (A − C) ⋅ (A − C) − r^2 = 0
Just using the associativity and commutativity of vector addition, plus the distributive and commutative properties of the dot product.
---
Dot products distribute like multiplication (you can prove it to yourself if you write out the x, y, z components).
If you rewrite A − C as X, and tb as Y, then you have the typical (X + Y)^2 = X^2 + 2XY + Y^2 form, where scalar multiplication is replaced with the vector dot product.
Read more about:
- Bernstein polynomial, Bézier Curves and Surfaces
In the mathematical field of numerical analysis, a Bernstein polynomial is a polynomial that is a linear combination of Bernstein basis polynomials. The idea is named after Sergei Natanovich Bernstein.
A numerically stable way to evaluate polynomials in Bernstein form is de Casteljau's algorithm.
Polynomials in Bernstein form were first used by Bernstein in a constructive proof for the Weierstrass approximation theorem. With the advent of computer graphics, Bernstein polynomials, restricted to the interval [0, 1],
became important in the form of Bézier curves.
- Affine Transformations
In Euclidean geometry, an affine transformation, or an affinity (from the Latin, affinis, "connected with"), is a geometric transformation that preserves lines and parallelism (but not necessarily distances and angles).
More generally, an affine transformation is an automorphism of an affine space (Euclidean spaces are specific affine spaces), that is, a function which maps an affine space onto itself while preserving both the dimension of
any affine subspaces (meaning that it sends points to points, lines to lines, planes to planes, and so on) and the ratios of the lengths of parallel line segments. Consequently, sets of parallel affine subspaces remain
parallel after an affine transformation. An affine transformation does not necessarily preserve angles between lines or distances between points, though it does preserve ratios of distances between points lying on a
straight line.
If X is the point set of an affine space, then every affine transformation on X can be represented as the composition of a linear transformation on X and a translation of X. Unlike a purely linear transformation, an affine
transformation need not preserve the origin of the affine space. Thus, every linear transformation is affine, but not every affine transformation is linear.
Examples of affine transformations include translation, scaling, homothety, similarity, reflection, rotation, shear mapping, and compositions of them in any combination and sequence.
Viewing an affine space as the complement of a hyperplane at infinity of a projective space, the affine transformations are the projective transformations of that projective space that leave the hyperplane at infinity
invariant, restricted to the complement of that hyperplane.
A generalization of an affine transformation is an affine map[1] (or affine homomorphism or affine mapping) between two (potentially different) affine spaces over the same field k. Let (X, V, k) and (Z, W, k) be two
affine spaces with X and Z the point sets and V and W the respective associated vector spaces over the field k.
A map f: X → Z is an affine map if there exists a linear map mf : V → W such that mf (x − y) = f (x) − f (y) for all x, y in X.[2]
Define a Physical Renderer with F1-F12 to toggle rendering mode and positions
Create a Powered-by newscasting website
Calculate the normals to display lighting
void calculate_normal(const double a[3], const double b[3], const double c[3], double result[3])
{
double x[3] = {b[0] - a[0], b[1] - a[1], b[2] - a[2]};
double y[3] = {c[0] - a[0], c[1] - a[1], c[2] - a[2]};
result[0] = x[1] * y[2] - y[1] * x[2];
result[1] = -(x[0] * y[2] - y[0] * x[2]);
result[2] = x[0] * y[1] - y[0] * x[1];
}
...
void display()
{
...
double n[3];
calculate_normal(a, b, c, n);
glBegin(GL_TRIANGLES);
glNormal3dv(n);
glVertex3dv(a);
glVertex3dv(b);
glVertex3dv(c);
glEnd();
}
Full playlist of topics: https://www.youtube.com/playlist?list=PLplnkTzzqsZS3R5DjmCQsqupu43oS9CFN
C = I (Kd cosθ + Ks (cosφ)^α)
The cosine part inside the parentheses is called the Geometry Term, which is a function of the incoming light to the surface, and the equation can be rewritten as:
C = I cosθ (Kd + Ks (cosφ)^α / cosθ)
This gives a reflectance function:
C = I cosθ fr(ω, v)
Note: d in the equation stands for diffuse, which is constant over all directions, while s stands for specular, which depends on the incoming light directions.
L0(ω0) = ∫Ω Li(ωi) cosθi fr(ωi, ω0) dωi
How to interpret this:
https://pbs.twimg.com/media/CHW_bGCUwAAIS1r.png from: http://viclw17.github.io/2018/06/30/raytracing-rendering-equation/
What if you have a bunch of light sources?
L0(ω0) = Σ Li (ωi) cosθi fr (ωi, ω0)
What if light is coming from all directions (above a hemisphere):
L0(ω0) = ∫Ω Li(ωi) cosθi fr(ωi, ω0) dωi
What if you have a sun and a sky the reflects the light:
L0(ω0) = ∫Ω Lsky (ωi) cosθi fr (ωi, ω0) dωi
+ Lsun (ωsun) cosθsun fr (ωsun, ω0)
What if the light is a single source, e.g. the sun:
L0(ω0) = Lsun(ωsun) cosθsun fr(ωsun, ω0)
What if you have a specular surface and a direct and indirect light source:
L0(ω0) = ∫Ω Li (ωi) cosθi fr (ωi, ω0) dωi
Li(ωi) = Ldirect(ωi) + Lindirect(ωi)
Subsurface scattering / refraction:
L0(ω0) = ∫ S2 Li (ωi) cosθi fs (ωi, ω0) dωi
+ Lemission(ω0)
Newer OpenGL functions:
source: Cem Yuksel
#include <windows.h>
#include <GL/glext.h>
// glext.h provides the function-pointer typedef:
// typedef void (APIENTRY *PFNGLGENVERTEXARRAYSPROC)(GLsizei n, GLuint *arrays);
PFNGLGENVERTEXARRAYSPROC glGenVertexArrays = (PFNGLGENVERTEXARRAYSPROC)wglGetProcAddress("glGenVertexArrays");
This works, but you have to load every modern function this way...
Better way that initialize all the modern OpenGL functions:
#include <glew.h>
glewInit();
GLint64 timer;
glGetInteger64v(GL_TIMESTAMP, &timer);
printf("Milliseconds: %f\n", timer/1000000.0);
C JAM notes:
FUN FACT
There are 365.2422 days in a Solar year. The average Gregorian calendar year has 365.2425 days. There is a 0.0003 day difference between the Gregorian and the Solar years. For this reason the Gregorian calendar will drift ahead 1 day every 3,333 years.
KindFile3, 3 points, 4 hours ago
I'd go with Macros:
#include <stdio.h>
#define __timing(f) do { printf("\n\nstarting\n"); f; printf("\nending"); } while(0)
void foo(void){
printf("foo!");
}
void bar(int i){
printf("bar: %d!",i);
}
int main(){
__timing(foo());
__timing(bar(1));
return 0;
}
This prints:
starting
foo!
ending
starting
bar: 1!
ending
Obviously you will need to replace printf("starting") and printf("ending") with the start and end timing functions, which I'd keep global, so start records the current time and end computes new time - old time.
And one neat thing about this is you can use _DEBUG flag and have 2 versions like:
#if _DEBUG
#define __timing(f) do { printf("\n\nstarting\n"); f; printf("\nending"); } while(0)
#else
#define __timing(f) do { f; } while(0)
#endif
So if NOT in _DEBUG mode you can disable it.
struct point makepoint (int x, int y)
{
struct point temp;
temp.x = x;
temp.y = y;
return temp;
}
struct rect screen;
struct point middle;
struct point makepoint(int, int);
screen.pt1 = makepoint(0, 0);
screen.pt2 = makepoint(XMAX, YMAX);
middle = makepoint((screen.pt1.x + screen.pt2.x) / 2, (screen.pt1.y + screen.pt2.y) / 2);
Best way to calculate normals of a changing mesh? CeruleanBoolean141
Hello again! I am working on a GPGPU project to simulate hydraulic erosion over some terrain. The map I start with has normals, but when the erosion changes the shape of the terrain mesh, I need these normals to update.
The only thing I can think to do is use a geometry shader to recalculate the normals (either every frame or every N frames). My question is: is a geometry shader the way to go, or is there a better solution?
Thanks to anyone who takes the time to read this.
If you're happy with the flat shaded look you can use the derivative functions to calculate a normal.
vec3 X = dFdx ( vertexPosition );
vec3 Y = dFdy ( vertexPosition );
vec3 normal = normalize ( cross ( X, Y ) );
Smooth normals are also possible using this technique but it involves multiple passes. Here's(*) an example implementation of that in Cinder.
* = Vertex displacement mapping is a technique where a texture or a procedural function is used to dynamically change the position of vertices in a mesh. It is often used to create a terrain with mountains from a height field, ocean waves or an animated flag.
For dynamically changing height fields, like an animated flag, not only the vertex position should change, but also its associated normal vector. One way to calculate the new normal would be to use a geometry shader. This is often called per-face normal computation on the GPU, because it can only calculate one normal vector for the whole triangle. The result looks faceted: each triangle will appear to be flat.
To overcome this, you can use a normal map. This is an additional texture that corresponds to the height field and contains offsets to the original surface normals. A shader can then fetch the normal from this texture on a per pixel basis. The great news is that normals are even automatically interpolated, so per-pixel shading will look incredibly smooth.
You should calculate a new normal map every time your height field (a.k.a. displacement map) changes. This can easily be done on the GPU. Using floating point textures, this is even easier and can be done using a simple fragment shader.
This sample will show you how to:
render to a floating point texture to create a displacement map on-the-fly
render the corresponding normal map, also a floating point texture
use both maps to render an animated, transparent piece of cloth that looks a lot like Sony PlayStation's XrossMediaBar background (but that's just a coincidence, don't you think? ;))
All of this is done on the GPU, very little work remains for the CPU. The sample can therefore easily run at 300+ FPS. Note that you do need a modern GPU for this, with support for floating point textures and vertex shader texture fetch. If your GPU supports shader model 3, you're probably good to go.
Source: https://github.com/paulhoux/Cinder-Samples/tree/master/SmoothDisplacementMapping
Simulating Mesh shader in compute shader: https://tellusim.com/mesh-shader-emulation/
/* allc.c - This showcases all the syntactic
* features of the C programming language.
*/
/* preprocessing directive */
#if 1 /* if directive */
#define NULL (void *)0 /* define directive */
#undef NULL /* undef directive */
#ifdef D /* ifdef directive */
#endif
#ifndef D /* ifndef directive */
#endif
#pragma deadbeef /* pragma directive */
#else /* else directive */
#if defined(D)
#endif
#include /* include directive */
#error "error" /* error directive */
#endif /* endif directive */
/* variadic macro */
#define VAMACRO(...) __VA_ARGS__
/* static assertion */
_Static_assert(sizeof(char) == 1, "test");
/* stringizing operator */
#define stringize(x) #x
/* token pasting operator */
#define concat(a, b) a##b
/* external storage types and example of variadic function */
extern int printf(const char *, ...);
/* function pointer */
int (*ifunc)(void);
/* array of function pointer */
int (*afunc[10])(void);
/* enumerations */
enum enum_t { EA, EB, EC, ED };
enum { VA, VB, VC, VD } enum_variation;
/* structures */
struct struct_t {
char a, b, c;
/* bitfields */
int b1:4, b2:4;
/* anonymous union */
union {
int e, f;
};
};
struct {
char a, b, c;
} struct_variation;
/* unions */
union union_t {
int a, b;
/* anonymous struct */
struct {
int c, d;
};
};
union { char a; } union_variation;
/* alias for a data type */
typedef struct struct_t struct_t;
typedef union union_t union_t;
typedef enum enum_t enum_t;
/* trigraphs */
int trigraphsavailable()
// returns 0 or 1; language standard C99 or later
{
// are trigraphs available??/
return 0;
return 1;
}
_Noreturn void noret(void){
for (;;); /* a _Noreturn function must never actually return */
}
/* function */
int main() {
/* primitive types */
char a; int b; float c;
double d; void *p; float _Complex Co = 1.0+(float _Complex)1.0i;
_Bool Bo;
/* array declaration */
int ar[10];
/* string literal */
u8"Hello " "C";
u"Hello " "C";
U"Hello " "C";
L"Hello " "C";
/* character literal*/
a = 'a';
/* escape sequence */
"\n" /* newline */
"\t" /* horizontal tab */
"\\" /* backslash */
"\f" /* form feed */
"\r" /* carriage return */
"\?" /* question mark */
"\v"; /* vertical tab */
/* stringize operator */
stringize(Hello C); /* "Hello C" */
/* Token pasting operator */
concat(a, r); /* ar */
/* demonic array */
1 [ar] = 1;
/* array designated initializer */
int ar2[10] = {[0] = 1, [1] = 2, [2] = 3};
/* struct designated initializer */
struct_t s = {.a = 1, .b = 2, .c = 3};
/* arrow operator */
(&s)->a;
/* dot operator */
(struct_t){0}.a = 1;
(union_t){0}.a = 1;
/* compound literals */
(int[]){1, 2, 3, 4};
/* compound literals with designated initializer*/
(int[]){[0] = 1, [1] = 2, [2] = 3, 4};
/* assignment operators */
a = 0; a *= 1;
a /= 1; a += 1;
a -= 1; a <<= 1;
a >>= 1; a &= 1;
a ^= 1; a |= 1;
a %= 1;
/* address and indirection operators */
&a; *ar;
/* arithmetic operators */
1 + 1; /* ADD */
1 - 1; /* SUB */
1 / 2; /* DIV */
1 * 1; /* MUL */
1 % 1; /* MOD */
/* logical operators */
1 && 1; /* AND */
1 || 0; /* OR */
!1; /* NOT */
/* increment and decrement operators */
a++; a--; /* post fix */
++a; --a; /* pre fix */
/* bitwise operators */
1 << 2; /* left shift */
1 >> 2; /* right shift */
1 | 1; /* OR */
1 & 1; /* AND */
1 ^ 1; /* XOR */
~1; /* one's complement */
/* relational operators */
1 > 0; /* 1 is greater than 0 */
1 < 0; /* 1 is less than 0 */
1 == 0; /* 1 is equal to 0 */
1 != 0; /* 1 is not equal to 0 */
1 >= 0; /* 1 is greater than or equal to 0 */
1 <= 0; /* 1 is less than or equal to 0 */
/* conditions */
if (1 > 0)
;
else
;
/* ternary operators */
(1 > 0) ? 1 : 0; /* if 1 > 0, return 1, else 0 */
/* label and goto */
goto l1;
l1:
/* loops */
do ; while(0);
while (0)
;
for (; 0;)
continue;
;
/* block scoping */
{}
/* switch statement */
switch (1){
case 0:
case 1:
default:
break;
}
/* digraphs */
ar<:0:> = 1; /* ar[0] */
<% %> /* { } */
%:define BEEF /* #define BEEF */
/* demonic digraphs */
0<:ar:> = 1;
/* storage-class specifier */
static int st; register int re;
auto int au; extern int ex; _Thread_local static int Thr;
/* _Generic */
_Generic((10), int: 1, char: 'A', default: "test");
/* type qualifiers */
const int cons;
int *restrict res;
volatile int vo;
_Atomic int At;
/* signed and unsigned types */
signed si; unsigned un;
/* sizeof, _Alignof and _Alignas operators */
sizeof(int); _Alignof(int);
_Alignas(4) char calign[4];
/* integer constants */
1; /* decimal */
01 ; /* octal */
0x01; /* hexadecimal */
/* floating point constants */
/* decimal floating point constant */
1.0e+1f;
1e1f;
/* hexadecimal floating point constant */
0x01.00p+1f;
0x1p+1f;
/* return keyword */
return 0;
}
Pretty sure "demonic arrays" was stated in the C standard. ~I just made it up~
Forgive me if I've forgotten something. I wonder what it would look like with another language like C++.
[–]tstanisl 9 points 3 hours ago*
Missing things:
variadic macro #define M(x, ...)
__VA_ARGS__,
_Alignas from C11
_Atomic
_Complex
_Bool
Edit
const, consider constant pointer
volatile is not a storage specifier
comma operator e.g. 1,2
const/restrict/static/volatile specifier for array parameters:
void foo(int A[restrict const volatile static 42]);
restrict and _Thread_local (thx to u/trBlueJ)
continue
long, including long double
short
_Generic
_Imaginary
_Noreturn
_Static_assert
__func__
u,U,l,L,ul, UL, ll,LL, ull, ULL suffixes for integer literals
u8, u, U, L prefixes for string literals
escape characters e.g. \n, \\
character literals, like 'a'
string literals, "Hello"
unary +, -, and * (dereference)
a ^= 1 is duplicated
macro concatenation ##
macro stringification #
macro defined
trailing comma for initializers: int A[] = {1, 2, 3, }
Edit2: - function calls (thx u/potterman28wxcv ) - bitfields in structs - casts
[–]trBlueJ 2 points 2 hours ago
There's also _Thread_local.
[–]potterman28wxcv 2 points 2 hours ago
And function calls! :P
[–]tstanisl 3 points 3 hours ago
you could typedef an enum or a function as well.
typedef enum { E } enum_type;
typedef int fun_type(int);
Did you know that in Blender you can easily animate oscillating objects with a single driver expression? ✨
1. Select your object
2. In its rotation write:
"# sin(frame / 300 * pi * 2 * 3) * 0.5"
where 3 is the number of swings it does, and 0.5 is the amplitude multiplier (use a bigger number for bigger swings), and 300 is your end frame
3. Enjoy the perfect loop!
Source: https://twitter.com/passivestar_/status/1665634739288518661
Name your cam-, eye- and up-vectors: px, py, pz (or camera_x, ...), vx, vy, vz (view_x, ...), r/ux, r/uy, r/uz (up_x, ...)
EDIT: I am starting my reply from scratch as I may have assumed too much familiarity with the subject.
The problem you are facing is that your formulas are basically not correct: your formulas for rotating left/right are correct only under the assumption that the "up" vector (the vector pointing upward from the camera) is always [0 1 0]
... which is not the case if you also want to rotate up/down.
And your formula for rotating up/down is not correct, since it only modifies the Y component, and rotations do not work that way.
The correct way to handle that is:
to store 3 variables that represent the camera position (as you did). Let's call them Px, Py, Pz
to store 3 variables that represent the camera view direction (instead of your eyeX/Y/Z that encode the point the camera is looking at). Let's call them Vx, Vy, Vz
to store 3 variables that represent the camera right vector (or up vector, as you wish). Let's take the right vector, and call it Rx, Ry, Rz.
Alternatively, you can have a nice "Vector" class that represents vectors instead of storing 3 separate variables each time. This is a detail at this point.
Now, your method to move your camera forward just becomes:
Px += Vx;
Py += Vy;
Pz += Vz;
You can use, for example, Rodrigues formula to rotate (hoping nobody will launch at you the "quaternion" magic word to express their cleverness ;) ). The general self-contained code to rotate around an arbitrary axis would then be:
// rotate the vector (vx, vy, vz) around (ax, ay, az) by an angle "angle"
void rotate(double &vx, double &vy, double &vz, double ax, double ay, double az, double angle) {
double ca = cos(angle);
double sa = sin(angle);
double crossx = -vy*az + vz*ay;
double crossy = -vz*ax + vx*az;
double crossz = -vx*ay + vy*ax;
double dot = ax*vx + ay*vy + az*vz;
double rx = vx*ca + crossx*sa + dot*ax*(1-ca);
double ry = vy*ca + crossy*sa + dot*ay*(1-ca);
double rz = vz*ca + crossz*sa + dot*az*(1-ca);
vx = rx;
vy = ry;
vz = rz;
}
And make sure to keep normalized coordinates for your camera vectors.
Now, to specifically rotate your camera up/down when you press a button:
// rotate up:
rotate(Vx, Vy, Vz, Rx, Ry, Rz, some_CONSTANT_angle);
// rotate down:
rotate(Vx, Vy, Vz, Rx, Ry, Rz, - some_CONSTANT_angle);
To rotate left/right, you first need to compute the "Up" vector that doesn't need to be stored (unless you want to, but it is redundant), and rotate both your view direction and right vectors:
// find up vector using a cross product:
double Ux = Ry*Vz - Rz*Vy;
double Uy = Rz*Vx - Rx*Vz;
double Uz = Rx*Vy - Ry*Vx;
//rotate left
rotate(Rx, Ry, Rz, Ux, Uy, Uz, some_CONSTANT_angle);
rotate(Vx, Vy, Vz, Ux, Uy, Uz, some_CONSTANT_angle);
// rotate right
rotate(Rx, Ry, Rz, Ux, Uy, Uz, - some_CONSTANT_angle);
rotate(Vx, Vy, Vz, Ux, Uy, Uz, - some_CONSTANT_angle);
Setting up your camera matrix now becomes:
gluLookAt( Px, Py, Pz, Px+Vx, Py+Vy, Pz+Vz, Ux, Uy, Uz); // the last three arguments are the up vector, so pass U (computed above), not R
Of course, I didn't test any of this code, and wrote it now. Hope it works!
Reddit.com/u/alt-no-more This is probably a very basic question, but I'm not even sure how to describe it well enough to ask Google the answer. I've written a simple program in C that creates a stack via a linked list. The stack seems to work as intended, but I get a weird result when I print. Here's my code:
#include <stdio.h>
#include <stdlib.h>
typedef struct node {
int data;
struct node *next;
} node_t;
typedef struct stack {
node_t *head;
} stack_t;
void push(stack_t *stack, int value) {
node_t *node = (node_t*) malloc(sizeof(node_t*));
node->data = value;
node->next = stack->head;
stack->head = node;
}
int pop(stack_t *stack) {
int value = stack->head->data;
stack->head = stack->head->next;
return value;
}
int main(void) {
stack_t *stack1 = (stack_t*) malloc(sizeof(stack_t*));
push(stack1, 1);
push(stack1, 2);
push(stack1, 3);
push(stack1, 4);
printf("stack1\t");
printf("%d ", pop(stack1));
printf("%d ", pop(stack1));
printf("%d ", pop(stack1));
printf("%d\n", pop(stack1));
stack_t *stack2 = (stack_t*) malloc(sizeof(stack_t*));
push(stack2, 1);
push(stack2, 2);
push(stack2, 3);
push(stack2, 4);
printf("stack2\t%d %d %d %d\n", pop(stack2), pop(stack2), pop(stack2), pop(stack2));
return 0;
}
When I run this, I get the following output: stack1 4 3 2 1 stack2 1 2 3 4 I'm not sure why this is. In stack1, it's clear that everything pops in the correct order. There must be something I don't understand about how printf works.
The compiler basically doesn't care what the file is called, whether it's a C or H extension, it just compiles the code all the same - like it may as well all be in one big source file. When you #include a header it just says "pretend this other file's contents exist here". Yes, there's the issue of scope - what you define in one C file doesn't exist in other C files unless externed via header but it's really not that complicated.
Perhaps a simple example could help. Consider the following:
[header.h]
int all_my_great_stuff(); /* forward declare */
#ifdef IMPLEMENTATION
int all_my_great_stuff() /* implementation */
{
return 1;
}
#endif
and
[main.c]
#include "header.h"
/* only has forward declare at this point */
#define IMPLEMENTATION
#include "header.h"
/* has forward declare and implementation in *this* unit */
The main thing to note is that IMPLEMENTATION should only be defined in *one* unit, or obviously you will get colliding symbols as multiple units provide the same stuff.
You could play around with static functions but that would be wasteful.
Source: https://stackoverflow.com/questions/840501/how-do-function-pointers-in-c-work
Let's start with a basic function which we will be pointing to:
int addInt(int n, int m) {
return n+m;
}
First thing, let's define a pointer to a function which receives 2 ints and returns an int:
int (*functionPtr)(int,int);
Now we can safely point to our function:
functionPtr = &addInt;
Now that we have a pointer to the function, let's use it:
int sum = (*functionPtr)(2, 3); // sum == 5
Passing the pointer to another function is basically the same:
int add2to3(int (*functionPtr)(int, int)) {
return (*functionPtr)(2, 3);
}
We can use function pointers in return values as well (try to keep up, it gets messy):
// this is a function called functionFactory which receives parameter n
// and returns a pointer to another function which receives two ints
// and it returns another int
int (*functionFactory(int n))(int, int) {
printf("Got parameter %d", n);
int (*functionPtr)(int,int) = &addInt;
return functionPtr;
}
But it's much nicer to use a typedef:
typedef int (*myFuncDef)(int, int);
// note that the typedef name is indeed myFuncDef
myFuncDef functionFactory(int n) {
printf("Got parameter %d", n);
myFuncDef functionPtr = &addInt;
return functionPtr;
}
OpenGL colors
glColor3f(0.0, 0.0, 0.0); // black
glColor3f(1.0, 0.0, 0.0); // red
glColor3f(0.0, 1.0, 0.0); // green
glColor3f(1.0, 1.0, 0.0); // yellow
glColor3f(0.0, 0.0, 1.0); // blue
glColor3f(1.0, 0.0, 1.0); // magenta
glColor3f(0.0, 1.0, 1.0); // cyan
glColor3f(1.0, 1.0, 1.0); // white
OpenGL functions reference Fixed Functional Pipeline vs Programmable 3.3 core
/*
* If you want to use any of these old functions from the
* fixed function pipeline you can use TinyGL or Mesa
* for software rendering version or implement them as wrappers
* around modern OpenGL
void glBegin(GLenum);
void glClear(GLbitfield);
void glClearColor(GLclampf, GLclampf, GLclampf, GLclampf);
void glColor3f(GLfloat, GLfloat, GLfloat);
void glColor4f(GLfloat, GLfloat, GLfloat, GLfloat);
void glCullFace(GLenum);
void glDisable(GLenum);
void glEnable(GLenum);
void glEnd(void);
void glFrustum(GLdouble, GLdouble, GLdouble, GLdouble, GLdouble, GLdouble);
GLubyte *glGetString(GLenum);
void glLoadIdentity(void);
void glMatrixMode(GLenum);
void glRotatef(GLfloat, GLfloat, GLfloat, GLfloat);
void glRotated(GLdouble, GLdouble, GLdouble, GLdouble);
void glScalef(GLfloat, GLfloat, GLfloat);
void glScaled(GLdouble, GLdouble, GLdouble);
void glScissor(GLint, GLint, GLsizei, GLsizei);
void glTexCoord1f(GLfloat);
void glTexCoord2f(GLfloat, GLfloat);
void glTexCoord3f(GLfloat, GLfloat, GLfloat);
void glTexCoord4f(GLfloat, GLfloat, GLfloat, GLfloat);
void glTexCoord1d(GLdouble);
void glTexCoord2d(GLdouble, GLdouble);
void glTexCoord3d(GLdouble, GLdouble, GLdouble);
void glTexCoord4d(GLdouble, GLdouble, GLdouble, GLdouble);
void glTexImage2D(GLenum, GLint, GLint, GLsizei, GLsizei, GLint, GLenum, GLenum, const GLvoid*);
void glTexSubImage2D(GLenum, GLint, GLint, GLint, GLsizei, GLsizei, GLenum, GLenum, const GLvoid*);
void glTranslatef(GLfloat, GLfloat, GLfloat);
void glTranslated(GLdouble, GLdouble, GLdouble);
void glVertex2f(GLfloat, GLfloat);
void glVertex3f(GLfloat, GLfloat, GLfloat);
void glVertex4f(GLfloat, GLfloat, GLfloat, GLfloat);
void glViewport(GLint, GLint, GLsizei, GLsizei);
*/
/* 3.3 core
void glActiveTexture(GLenum);
void glAttachShader(GLuint, GLuint);
void glBeginConditionalRender(GLuint, GLenum);
void glBeginQuery(GLenum, GLuint);
void glBeginTransformFeedback(GLenum);
void glBindAttribLocation(GLuint, GLuint, const GLchar*);
void glBindBuffer(GLenum, GLuint);
void glBindBufferBase(GLenum, GLuint, GLuint);
void glBindBufferRange(GLenum, GLuint, GLuint, GLintptr, GLsizeiptr);
void glBindFragDataLocation(GLuint, GLuint, const char*);
void glBindFragDataLocationIndexed(GLuint, GLuint, GLuint, const char*);
void glBindFramebuffer(GLenum, GLuint);
void glBindRenderbuffer(GLenum, GLuint);
void glBindSampler(GLuint, GLuint);
void glBindTexture(GLenum, GLuint);
void glBindVertexArray(GLuint);
void glBlendColor(GLclampf, GLclampf, GLclampf, GLclampf);
void glBlendEquation(GLenum);
void glBlendEquationSeparate(GLenum, GLenum);
void glBlendFunc(GLenum, GLenum);
void glBlendFuncSeparate(GLenum, GLenum, GLenum, GLenum);
void glBlitFramebuffer(GLint, GLint, GLint, GLint, GLint, GLint, GLint, GLint, GLbitfield, GLenum);
void glBufferData(GLenum, GLsizeiptr, const GLvoid*, GLenum);
void glBufferSubData(GLenum, GLintptr, GLsizeiptr, const GLvoid*);
void glCheckFramebufferStatus(GLenum);
void glClampColor(GLenum, GLenum);
void glClear(GLbitfield);
void glClearBufferiv(GLenum, GLint, const GLint*);
void glClearBufferuiv(GLenum, GLint, const GLint*);
void glClearBufferfv(GLenum, GLint, const GLfloat*);
void glClearBufferfi(GLenum, GLint, GLfloat, GLint);
void glClearColor(GLclampf, GLclampf, GLclampf, GLclampf);
void glClearDepth(GLclampd);
void glClearStencil(GLint);
GLenum glClientWaitSync(GLsync, GLbitfield, GLuint64);
void glColorMask(GLboolean, GLboolean, GLboolean, GLboolean);
void glCompileShader(GLuint);
void glCompressedTexImage1D(GLenum, GLint, GLsizei, GLint, GLsizei, const GLvoid*);
void glCompressedTexImage2D(GLenum, GLint, GLenum, GLsizei, GLsizei, GLint, GLsizei, const GLvoid*);
void glCompressedTexImage3D(GLenum, GLint, GLenum, GLsizei, GLsizei, GLsizei, GLint, GLsizei, const GLvoid*);
void glCompressedTexSubImage1D(GLenum, GLint, GLint, GLsizei, GLenum, GLsizei, const GLvoid*);
*/
Source: http://www.songho.ca/opengl/gl_vbo.html
// unit cube
// A cube has 6 sides and each side has 4 vertices, therefore, the total number
// of vertices is 24 (6 sides * 4 verts), and 72 floats in the vertex array
// since each vertex has 3 components (x,y,z) (= 24 * 3)
// v6----- v5
// /| /|
// v1------v0|
// | | | |
// | v7----|-v4
// |/ |/
// v2------v3
// vertex position array
GLfloat vertices[] = {
.5f, .5f, .5f, -.5f, .5f, .5f, -.5f,-.5f, .5f, .5f,-.5f, .5f, // v0,v1,v2,v3 (front)
.5f, .5f, .5f, .5f,-.5f, .5f, .5f,-.5f,-.5f, .5f, .5f,-.5f, // v0,v3,v4,v5 (right)
.5f, .5f, .5f, .5f, .5f,-.5f, -.5f, .5f,-.5f, -.5f, .5f, .5f, // v0,v5,v6,v1 (top)
-.5f, .5f, .5f, -.5f, .5f,-.5f, -.5f,-.5f,-.5f, -.5f,-.5f, .5f, // v1,v6,v7,v2 (left)
-.5f,-.5f,-.5f, .5f,-.5f,-.5f, .5f,-.5f, .5f, -.5f,-.5f, .5f, // v7,v4,v3,v2 (bottom)
.5f,-.5f,-.5f, -.5f,-.5f,-.5f, -.5f, .5f,-.5f, .5f, .5f,-.5f // v4,v7,v6,v5 (back)
};
// normal array
GLfloat normals[] = {
0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, // v0,v1,v2,v3 (front)
1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, // v0,v3,v4,v5 (right)
0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, // v0,v5,v6,v1 (top)
-1, 0, 0, -1, 0, 0, -1, 0, 0, -1, 0, 0, // v1,v6,v7,v2 (left)
0,-1, 0, 0,-1, 0, 0,-1, 0, 0,-1, 0, // v7,v4,v3,v2 (bottom)
0, 0,-1, 0, 0,-1, 0, 0,-1, 0, 0,-1 // v4,v7,v6,v5 (back)
};
// colour array
GLfloat colors[] = {
1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, // v0,v1,v2,v3 (front)
1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, // v0,v3,v4,v5 (right)
1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, // v0,v5,v6,v1 (top)
1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, // v1,v6,v7,v2 (left)
0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, // v7,v4,v3,v2 (bottom)
0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1 // v4,v7,v6,v5 (back)
};
// texture coord array
GLfloat texCoords[] = {
1, 0, 0, 0, 0, 1, 1, 1, // v0,v1,v2,v3 (front)
0, 0, 0, 1, 1, 1, 1, 0, // v0,v3,v4,v5 (right)
1, 1, 1, 0, 0, 0, 0, 1, // v0,v5,v6,v1 (top)
1, 0, 0, 0, 0, 1, 1, 1, // v1,v6,v7,v2 (left)
0, 1, 1, 1, 1, 0, 0, 0, // v7,v4,v3,v2 (bottom)
0, 1, 1, 1, 1, 0, 0, 0 // v4,v7,v6,v5 (back)
};
// index array for glDrawElements()
// A cube requires 36 indices = 6 sides * 2 tris * 3 verts
GLuint indices[] = {
0, 1, 2, 2, 3, 0, // v0-v1-v2, v2-v3-v0 (front)
4, 5, 6, 6, 7, 4, // v0-v3-v4, v4-v5-v0 (right)
8, 9,10, 10,11, 8, // v0-v5-v6, v6-v1-v0 (top)
12,13,14, 14,15,12, // v1-v6-v7, v7-v2-v1 (left)
16,17,18, 18,19,16, // v7-v4-v3, v3-v2-v7 (bottom)
20,21,22, 22,23,20 // v4-v7-v6, v6-v5-v4 (back)
};
More copied from PortableGL: https://github.com/rswinkle/PortableGL/blob/master/src/gl_impl_unsafe.c
// Stubs to let real OpenGL libs compile with minimal modifications/ifdefs
// add what you need
void glGetDoublev(GLenum pname, GLdouble* params) { }
void glGetInteger64v(GLenum pname, GLint64* params) { }
void glGetProgramiv(GLuint program, GLenum pname, GLint* params) { }
void glGetProgramInfoLog(GLuint program, GLsizei maxLength, GLsizei* length, GLchar* infoLog) { }
void glAttachShader(GLuint program, GLuint shader) { }
void glCompileShader(GLuint shader) { }
void glGetShaderInfoLog(GLuint shader, GLsizei maxLength, GLsizei* length, GLchar* infoLog) { }
void glLinkProgram(GLuint program) { }
void glShaderSource(GLuint shader, GLsizei count, const GLchar** string, const GLint* length) { }
void glGetShaderiv(GLuint shader, GLenum pname, GLint* params) { }
void glDeleteShader(GLuint shader) { }
void glDetachShader(GLuint program, GLuint shader) { }
GLuint glCreateProgram() { return 0; }
GLuint glCreateShader(GLenum shaderType) { return 0; }
GLint glGetUniformLocation(GLuint program, const GLchar* name) { return 0; }
GLint glGetAttribLocation(GLuint program, const GLchar* name) { return 0; }
GLboolean glUnmapBuffer(GLenum target) { return GL_FALSE; }
GLboolean glUnmapNamedBuffer(GLuint buffer) { return GL_FALSE; }
// TODO
void glLineWidth(GLfloat width) { }
void glActiveTexture(GLenum texture) { }
void glTexParameterfv(GLenum target, GLenum pname, const GLfloat* params) { }
void glUniform1f(GLint location, GLfloat v0) { }
void glUniform2f(GLint location, GLfloat v0, GLfloat v1) { }
void glUniform3f(GLint location, GLfloat v0, GLfloat v1, GLfloat v2) { }
void glUniform4f(GLint location, GLfloat v0, GLfloat v1, GLfloat v2, GLfloat v3) { }
void glUniform1i(GLint location, GLint v0) { }
void glUniform2i(GLint location, GLint v0, GLint v1) { }
void glUniform3i(GLint location, GLint v0, GLint v1, GLint v2) { }
void glUniform4i(GLint location, GLint v0, GLint v1, GLint v2, GLint v3) { }
void glUniform1ui(GLuint location, GLuint v0) { }
void glUniform2ui(GLuint location, GLuint v0, GLuint v1) { }
void glUniform3ui(GLuint location, GLuint v0, GLuint v1, GLuint v2) { }
void glUniform4ui(GLuint location, GLuint v0, GLuint v1, GLuint v2, GLuint v3) { }
void glUniform1fv(GLint location, GLsizei count, const GLfloat* value) { }
void glUniform2fv(GLint location, GLsizei count, const GLfloat* value) { }
void glUniform3fv(GLint location, GLsizei count, const GLfloat* value) { }
void glUniform4fv(GLint location, GLsizei count, const GLfloat* value) { }
void glUniform1iv(GLint location, GLsizei count, const GLint* value) { }
void glUniform2iv(GLint location, GLsizei count, const GLint* value) { }
void glUniform3iv(GLint location, GLsizei count, const GLint* value) { }
void glUniform4iv(GLint location, GLsizei count, const GLint* value) { }
void glUniform1uiv(GLint location, GLsizei count, const GLuint* value) { }
void glUniform2uiv(GLint location, GLsizei count, const GLuint* value) { }
void glUniform3uiv(GLint location, GLsizei count, const GLuint* value) { }
void glUniform4uiv(GLint location, GLsizei count, const GLuint* value) { }
void glUniformMatrix2fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value) { }
void glUniformMatrix3fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value) { }
void glUniformMatrix4fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value) { }
void glUniformMatrix2x3fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value) { }
void glUniformMatrix3x2fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value) { }
void glUniformMatrix2x4fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value) { }
void glUniformMatrix4x2fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value) { }
void glUniformMatrix3x4fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value) { }
void glUniformMatrix4x3fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value) { }
#ifdef _WIN32
#include <windows.h>
extern "C" {
__declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
__declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 0x00000001;
}
// Or on one line each: extern "C" __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
#endif
#ifdef _MSC_VER // Check if MS Visual C compiler
# include <windows.h> // Only include the windows headers if compiling for Windows
# pragma comment(lib, "opengl32.lib") // Compiler-specific directive to avoid manual configuration
# pragma comment(lib, "glu32.lib") // Link libraries
#endif
#define LOWORD(l) ((WORD)(((DWORD_PTR)(l)) & 0xffff))
The HIWORD macro is defined as:
#define HIWORD(l) ((WORD)((((DWORD_PTR)(l)) >> 16) & 0xffff))
Test if the window is in focus and active; otherwise this is where to handle pausing
case WM_ACTIVATE: // Watch For Window Activate Message
{
if (!HIWORD(wParam)) // Check Minimization State
{
active=TRUE; // Program Is Active
}
else
{
active=FALSE; // Program Is No Longer Active
}
return 0; // Return To The Message Loop
}
case WM_SYSCOMMAND: //Intercept System Commands
{
switch (wParam) //Check System Calls
{
case SC_SCREENSAVE: //Screensaver Trying To Start?
case SC_MONITORPOWER: //Monitor Trying To Enter Powersave?
return 0; //Prevent From Happening
}
break; //Exit
}
bool keys[256]; //Array Used For The Keyboard Routine
bool active=TRUE; //Window Active Flag Set To TRUE By Default
...
case WM_KEYDOWN: //Is A Key Being Held Down?
{
keys[wParam] = TRUE; //If So, Mark It As TRUE
return 0; //Jump Back
}
case WM_KEYUP: //Has A Key Been Released?
{
keys[wParam] = FALSE; //If So, Mark It As FALSE
return 0; //Jump Back
}
#ifdef _MSC_VER // Check if MS Visual C compiler
# ifndef _MBCS
# define _MBCS // Uses Multi-byte character set
# endif
# ifdef _UNICODE // Make sure the Unicode character set is not used
# undef _UNICODE
# endif
# ifdef UNICODE
# undef UNICODE
# endif
#endif
Endianness conversion in GCC compilers (Source: u/TheDefault8 in https://www.reddit.com/r/opengl/comments/pr9zlh/integer_texture_not_rendering/)
#if (defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__) || \
(defined(BYTE_ORDER) && BYTE_ORDER == LITTLE_ENDIAN) || \
defined(__LITTLE_ENDIAN__) || \
defined(_MSC_VER) || \
defined(__ARMEL__) || \
defined(__THUMBEL__) || \
defined(__AARCH64EL__) || \
defined(_MIPSEL) || defined(__MIPSEL) || defined(__MIPSEL__)
# define BO_LITTLE_ENDIAN 1
#else
# define BO_LITTLE_ENDIAN 0
#endif
uint32_t htonl(uint32_t x)
{
#if BO_LITTLE_ENDIAN
uint8_t *s = (uint8_t *)&x;
return (uint32_t)(s[0] << 24 | s[1] << 16 | s[2] << 8 | s[3]);
#else
return x;
#endif
}
uint16_t htons(uint16_t x)
{
#if BO_LITTLE_ENDIAN
uint8_t *s = (uint8_t *)&x;
return (uint16_t)(s[0] << 8 | s[1]);
#else
return x;
#endif
}
Big-endianness detection for older GCC compilers:
You may try __BIG_ENDIAN__ or __BIG_ENDIAN or _BIG_ENDIAN, which are often defined on big-endian compilers.
This will improve detection. And if you specifically target PowerPC platforms, you can add a few more tests to improve detection even further. Try _ARCH_PPC or __PPC__ or __PPC or PPC or __powerpc__
or __powerpc or even powerpc. Bind all these defines together, and you have a pretty fair chance of detecting big-endian systems, and PowerPC in particular, whatever the compiler and its version.
A contrarian take (Source: https://stackoverflow.com/questions/8978935/detecting-endianness)
Instead of looking for a compile-time check, why not just use big-endian order (which is considered "network order" by many) and use the htons/htonl/ntohs/ntohl functions provided by
most UNIX systems and Windows? They're already defined to do the job you're trying to do. Why reinvent the wheel?
Boost-like C alternatives
https://docs.gtk.org/glib/
http://apr.apache.org/
https://www.hpl.hp.com/personal/Hans_Boehm/gc/
Collision detection
auto verts = sceneObject->getEditVerts();
auto norms = sceneObject->getEditNorms();
auto triangleInds = mesh->getIndices(); // hypothetical accessor; the snippet originally repeated getCount() here
int numTris = triangleInds->getCount();
uint3 *triData = (uint3 *)triangleInds->getData();
float3 *vertsData = (float3 *)verts->getData();
float3 *normsData = (float3 *)norms->getData();
// for each triangle in the collision geometry
for (int i = 0; i < numTris; i++) {
bool outsidePlane = false;
bool outsideAllVerts = false;
bool outsideAllEdges = false;
float3 v1 = vertsData[triData[i].x];
float3 v2 = vertsData[triData[i].y];
float3 v3 = vertsData[triData[i].z];
// Assume flat normals for collision (all 3 normals would be the same)
float3 pN = normsData[triData[i].x].normalized();
// only test for vertical polygons
if (fabs(pN.y) > 0.1f)
continue;
float d = -((v1 + v2 + v3) / 3.0f).dot(pN);
// get point-to-plane distance from model center
float ppd = pN.dot(collSphereOrigin) + d;
if (ppd > collSphereRadius)
{
outsidePlane = true;
continue;
}
}
This is a rather simple formula for point-plane distance, with plane equation
Ax + By + Cz + D = 0
Distance = (A*x0 + B*y0 + C*z0 + D) / sqrt(A*A + B*B + C*C)
where (x0, y0, z0) are the point coordinates. If your plane's normal vector (A, B, C) is normalized (unit length), the denominator may be omitted.
(The sign of the distance is usually not important for intersection purposes.)
static bool intersectRaySegmentSphere(float3 o, float3 d, float3 so, float radius2, float3 &ip)
{
// we pass in d non-normalized to keep its length
// then we use the length later to compare the intersection point to make sure
// we're within the actual ray segment
float l = d.length();
d /= l;
float3 m = o - so;
float b = m.dot(d);
float c = m.dot(m) - radius2;
// Exit if r's origin is outside s (c > 0) and r is pointing away from s (b > 0)
if (c > 0.0f && b > 0.0f)
return false;
float discr = b * b - c;
// A negative discriminant corresponds to the ray missing the sphere
if (discr < 0.0f)
return false;
// Ray now found to intersect sphere, compute smallest t value of intersection
float t = -b - sqrtf(discr);
// If t is negative, the ray started inside the sphere, so clamp t to zero
if (t < 0.0f)
t = 0.0f;
ip = o + (d * t);
// reject intersections beyond the end of the segment (d originally had length l)
if (t > l)
return false;
return true;
}
bool outsideV1 = ((v1-collSphereOrigin).lengthSquared() > collSphereRadius2);
bool outsideV2 = ((v2-collSphereOrigin).lengthSquared() > collSphereRadius2);
bool outsideV3 = ((v3-collSphereOrigin).lengthSquared() > collSphereRadius2);
if (outsideV1 && outsideV2 && outsideV3) {
// Sphere outside of all triangle vertices
outsideAllVerts = true;
}
// build 3 rays (line segments)
float3 a = v2-v1;
float3 b = v3-v2;
float3 c = v1-v3;
float3 ip;
if(!intersectRaySegmentSphere(v1, a, collSphereOrigin, collSphereRadius2, ip) &&
!intersectRaySegmentSphere(v2, b, collSphereOrigin, collSphereRadius2, ip) &&
!intersectRaySegmentSphere(v3, c, collSphereOrigin, collSphereRadius2, ip))
{
outsideAllEdges = true;
}
if (outsideAllVerts && outsideAllEdges) {
continue;
}
sceneObject->getMeshes()[0]->getMaterial()->setDiffuse(float4(1, 0, 0, 1));
// push the character (us) outside of the intersected body
shiftDelta += pN*(collSphereRadius-ppd);
numCollisions++;
// This is one indentation level lower than the code above (at the function's outer scope)
if (numCollisions != 0) {
shiftDelta /= (float)numCollisions;
if (shiftDelta.length() > lastWalkSpeed)
{
shiftDelta = shiftDelta.normalized();
shiftDelta *= lastWalkSpeed*1.1f;
}
}
model->setPos(model->getPos() + shiftDelta);
One of the simplest formulas or expressions possible is the cosine of a linear argument. Popular wisdom (especially among old-school coders) is that trigonometric functions are expensive and that
it is therefore important to avoid them (by means of LUTs or linear/triangular approximations). Often popular wisdom is wrong: although the above still holds true in some special cases (a CPU-heavy
inner loop), it does not in general. For example, on the GPU, computing a cosine is way, way faster than any attempt to approximate it. So, let's take advantage of this and go with the straight cosine expression:
color(t) = a + b * cos[2PI(c * t + d)]
Example:
// cosine based palette, 4 vec3 params
vec3 palette( in float t, in vec3 a, in vec3 b, in vec3 c, in vec3 d )
{
return a + b*cos( 6.28318*(c*t+d) );
}
r/Lumornys
If you're going to cover both fixed-function and programmable pipeline, you can make a sort of history lesson on what was introduced when - glBegin/glEnd in 1.0, arrays in 1.1, VBO in 1.5, shaders in 2.0
(with slightly different syntax), then deprecating old stuff in 3.0, etc.
There seem to be misconceptions in some tutorials about what is "old" and what is "new", as if before "modern" OpenGL 3.0 there was nothing but 1.0. In fact, the transition was a lot more gradual.
├── src
│ ├── base
│ │ └── render.ts
│ ├── core
│ │ ├── camera
│ │ │ └── camera.ts
│ │ ├── cutscene
│ │ │ └── cutscene.ts
│ │ ├── debug
│ │ │ └── debug.ts
│ │ ├── gameobjects
│ │ │ ├── circle.ts
│ │ │ ├── gameObject.ts
│ │ │ ├── rect.ts
│ │ │ ├── roundrect.ts
│ │ │ └── sprite.ts
│ │ ├── game.ts
│ │ ├── group
│ │ │ └── group.ts
│ │ ├── input
│ │ │ └── input.ts
│ │ ├── interactive
│ │ │ └── text.ts
│ │ ├── lights
│ │ │ └── staticLight.ts
│ │ ├── loader
│ │ │ └── loader.ts
│ │ ├── map
│ │ │ └── tilemap.ts
│ │ ├── math
│ │ │ └── clamp.ts
│ │ ├── particles
│ │ │ ├── particleEmitter.ts
│ │ │ └── particle.ts
│ │ ├── physics
│ │ │ ├── circleToRectIntersect.ts
│ │ │ ├── collider.ts
│ │ │ └── rectToRectIntersect.ts
│ │ ├── scene.ts
│ │ ├── sound
│ │ │ └── sound.ts
│ │ └── storage
│ │ └── storage.ts
│ ├── helper
│ │ └── color
│ │ ├── getValuesHSL.ts
│ │ ├── getValuesRGB.ts
│ │ ├── hexToHSL.ts
│ │ ├── hexToRGBA.ts
│ │ ├── hexToRGB.ts
│ │ ├── hslaToRGBA.ts
│ │ ├── hslToRGB.ts
│ │ ├── isHex.ts
│ │ ├── isHSL.ts
│ │ ├── isRGB.ts
│ │ ├── randomColor.ts
│ │ ├── rgbaToHSLA.ts
│ │ ├── rgbaToRGB.ts
│ │ ├── rgbToHSL.ts
│ │ └── rgbToRGBA.ts
│ ├── index.ts
│ └── utils
│ ├── randomInt.ts
│ └── validURL.ts
// update VBO for each character
float vertices[6][4] = {
{ xpos, ypos + h, 0.0f, 0.0f },
{ xpos, ypos, 0.0f, 1.0f },
{ xpos + w, ypos, 1.0f, 1.0f },
{ xpos, ypos + h, 0.0f, 0.0f },
{ xpos + w, ypos, 1.0f, 1.0f },
{ xpos + w, ypos + h, 1.0f, 0.0f }
};
Stupid SSAO Question (self.opengl)
I am still a newbie in OpenGL. I am currently learning different Post Processing effects. So far I have done the Bloom effect and now I am learning about SSAO.
I am doing forward rendering and most of the SSAO tutorials I have found are done in deferred rendering. I found one blog which shows to do the same in forward rendering using only Depth texture.
My stupid question is, is SSAO calculation done after all the lighting calculations and rendering is done or before main lighting calculations? If it is done after, how to combine it in the final texture?
I am confused, because SSAO is a part of post process so I think it is done after all lighting is done.
Sorry for my stupidity. Thanks.
---
corysama 2 points 3 hours ago
Common practice with forward rendering is to do a depth prepass of your scene. Then use that depth buffer to generate a SSAO full-screen texture. Then use that during your forward lighting pass to modulate the ambient light.
Bonus points if you can schedule the SSAO convolution as a compute shader that runs in parallel with shadow map rasterization.
r/Eklundz
My thoughts on good quest design:
Meaningful: no “kill 5 rats” quests, proper stuff.
Multiple solutions: This is something 99% of all games fail at. A good quest needs to have 2-3 different solutions, otherwise it will just feel like a mandatory railroad.
Challenging: If a quest is not challenging it's pointless.
It should lead to something: A good quest leads to something new, it's not just an isolated event. There are a few different examples here. World of Warcraft has quests the devs call
“bread crumb quests” that lead the player to a new zone by giving them a letter or something to deliver to the captain of the guard in a fort in the next zone, or something similar.
These quests might not be challenging but they feel meaningful because you start a journey and go to explore new things. Another example would be a quest that changes the state of the world,
like siege quests in Skyrim where the city changes ruler after you finish it.
Those are my thoughts.
r/therooseisloose578
...
Most of the time you can break quests into two things. The actions that the player needs to take in order to complete the quest, and the narrative context/wrapper to the quest. For the actions,
that relates to the "puzzle" aspect that you are talking about. Some people like puzzles so then that would be a good quest for them. The narrative context relates to pretty much everything else
you're talking about. So that's the story, and why you're doing what you're doing. What you refer to as "player expression" is actually commonly referred to as "player agency", and studies have shown
that players enjoy quests more if there is high perceived player agency.
...
r/Xolarix
...
Very rarely is it a singular quest, most of them are actually questlines, where you complete multiple tasks, and there is an overarching story in the questline. And it's not just a single side
quest that branches off from the main story, but there are literally dozens of side quests and each of them take like 15-30 minutes to complete, which is why that game is so immersive.
...
So I'd say it's a combination of good writing, and interesting unique mechanics for quests that force the player to do something new, or use the knowledge they already had in a new way,
or where using that mechanic doesn't have the expected outcome for that quest.
...
https://www.reddit.com/r/raytracing/comments/155tsky/homemade_raytracer_in_c/
harieamjari[S] 6 points 3 days ago*
https://github.com/harieamjari/raytracer. Requires no libraries. Outputs raw 8-bit-depth RGBA pixels. Image can be constructed by piping the output to your desired software:
./main | ffmpeg -f rawvideo -pix_fmt rgba -s 640x480 -i pipe:0 ray.png -y
~For some reason, gcc generated executable crashes, but it works on clang with -O0 enabled (linux-x86-64).~ This bug was present in commit https://github.com/harieamjari/raytracer/tree/4410d3e67c1ed52cfa30b6108adc2cd72410eab0. A fix has been committed thanks to /u/skeeto.
~Written on my phone where this bug doesn't appear on my phone's compiler (aarch64).~
[–]skeeto 5 points 2 days ago
Nice job, that looks neat!
You really ought to put function prototypes in headers so that they're consistent between translation units. Otherwise it's easy to mismatch. For instance, the prototype for compute_snormal doesn't match the definition, and before I fixed it, it would simply crash.
--- a/math.c
+++ b/math.c
@@ -93,6 +93,6 @@ float magnitude(vec3D v){
-vec3D compute_snormal(triangle3D *triangle){
- vec3D A = *triangle->vertices[0];
- vec3D B = *triangle->vertices[1];
- vec3D C = *triangle->vertices[2];
+vec3D compute_snormal(triangle3D triangle){
+ vec3D A = *triangle.vertices[0];
+ vec3D B = *triangle.vertices[1];
+ vec3D C = *triangle.vertices[2];
It's trivial to skip the ffmpeg step and use Netpbm as your output format, which is supported by most image viewers. You just need to add a header and drop the alpha channel:
--- a/main.c
+++ b/main.c
@@ -162,2 +162,3 @@ int main(int argc, char *argv[]){
+ printf("P6\n%d %d\n255\n", image_width, image_height);
for (int y = 0; y < image_height; y++)
@@ -167,3 +168,2 @@ int main(int argc, char *argv[]){
putchar(img_buf[y][x].b);
- putchar(img_buf[y][x].a);
rand is okay for quick toy programs, but for anything else it's poor. It has implicit global state, and so is (usually) wrapped in a lock… in the best case. This is a giant contention point that kills multithreaded performance, such as that commented-out OpenMP pragma. It would be better to thread a PRNG state down your call stack with a per-thread state. For example, I added a PRNG parameter to montc_ray, and embedded an LCG:
--- a/utils.c
+++ b/utils.c
@@ -44,7 +44,11 @@ uint8_t u8_getb(rgba_t rgba){
// generate monte carlo ray
-vec3D montc_ray(vec3D norm){
+vec3D montc_ray(vec3D norm, uint64_t *rng){
+ *rng = *rng*0x3243f6a8885a308d + 1;
+ uint16_t x = *rng >> 48;
+ uint16_t y = *rng >> 32;
+ uint16_t z = *rng >> 16;
vec3D randv = normalize((vec3D){
- (float)(1000 - rand()%2000),
- (float)(1000 - rand()%2000),
- (float)(1000 - rand()%2000)
+ x/(float)0x8000 - 1,
+ y/(float)0x8000 - 1,
+ z/(float)0x8000 - 1,
});
Then threaded that through all call sites, also adding the new parameter to get_pixel and shoot_ray. This required modifying the same prototypes in multiple places due to them not being in headers where they belong, which is the second time that caused issues. I initialized the state in the top-level loop from the loop variable, which, with OpenMP restored, effectively makes it thread-local:
--- a/main.c
+++ b/main.c
-//#pragma omp parallel for
- for (int y = 0; y < image_height; y++)
+ #pragma omp parallel for
+ for (int y = 0; y < image_height; y++) {
+ uint64_t rng = y + 1;
for (int x = 0; x < image_width; x++){
- rgba_t rgba = get_pixel(x - (image_width/2), (image_height/2) - y);
+ rgba_t rgba = get_pixel(x - (image_width/2), (image_height/2) - y, &rng);
img_buf[y][x].r = (uint8_t)(rgba.r*255.0);
@@ -160,2 +160,3 @@ int main(int argc, char *argv[]){
}
+ }
Even just single threaded, using a better PRNG makes it about 25% faster on my system, and even better with lots of threads because it eliminates the aforementioned contention.
maep 16 points 4 hours ago*
Hi, (ex) professional audio dev here. Unless you work with embedded audio, the common practice is to convert your samples to float, normalized to [-1, 1]. That makes writing filters much easier and
you no longer have to think about scaling issues, but be warned that you might run into denormals.
When mixing two channels in s16 you could do a right-shift (which acts as x * 0.5) beforehand to avoid clipping. The problem with that method is that it doesn't preserve loudness. The correct approach
for uncorrelated signals is multiplying with 1/sqrt(2) [1], which approximates to (x * 0.7), but then you might get clipping.
Most fixed point DSPs support saturation arithmetic which deals with clipping issues, but on desktop CPUs you have to do that manually.
With all that being said, to mix two uncorrelated 16-bit samples, a and b, in fixed-point:
int16_t mix(int16_t a, int16_t b) {
// mix samples
int32_t x = (int32_t)a + (int32_t)b;
// multiply with 1/sqrt(2) in Q15 [2]
// x = (x * 23170) >> 15; // using arithmetic right shift
// u/richardxday pointed out that ASR is not ideal, and afaik implementation defined
// div is better and on modern compilers equally fast
x = (x * 23170) / 32768;
// clip output
x = x < INT16_MIN ? INT16_MIN :
x > INT16_MAX ? INT16_MAX : x;
return (int16_t)x;
}
As you can see in compiler explorer, the div is turned into a sar instruction: https://godbolt.org/z/3sohca9oj
If you want to get deeper into this topic, I highly recommend reading this free book: http://www.dspguide.com/pdfbook.htm
richardxday 5 points 2 hours ago
Just a small point, I'd recommend avoiding ASR's if possible because they round asymmetrically (towards -ve infinity). A right shift is equivalent to a divide for positive numbers and not equivalent for negative numbers.
For example:
5 >> 1 = 2
-5 >> 1 = -3
This means you'll get different results for your sqrt(.5) gain for +ve and -ve numbers.
I'd suggest sticking with a divide unless performance *requires* a faster method.
[–]maep 3 points 2 hours ago
Good point. I also skipped rounding (adding 0.5 or 16384 in Q15) to keep it a bit simpler.
The thing is, we're talking about the least significant bit here, which is inaudible noise at -96 dB. For all practical intents and purposes this is good enough.
Jorengarenar 17 points 9 hours ago*
Sint16
Why aren't you using standard's fixed width integers from stdint.h header?
Is it possible the sign bit is getting discarded doing it this way?
32-bit integer -32768 would be represented in binary using two's complement as:
1111 1111 1111 1111 1000 0000 0000 0000
If we now "cut" it down to 16-bit we get: 1000 0000 0000 0000, which is still -32768
[–]DeeBoFour20[S] 11 points 9 hours ago
Sint16 is from SDL. This code is in an SDL callback function so I'm just using that here.
I guess I'm good then regarding the cast to 16 bit. I assume that also holds true for any negative value larger than -32768?
[–]Beliriel 2 points 6 hours ago
int32_t -65536 would get cast to int16_t 0 though, if I understood that right.
[–]Selacios 4 points 6 hours ago
Yes, although officially signed downcasting in C is implementation-defined, so the actual behavior depends on the compiler.
[–]Beliriel 0 points 5 hours ago
What about
((x >> 16) & -32768) | (x & 32767)
x being an int32.
Then you can basically just cut off the top two bytes and still get the signed remainder < 65536 correctly, no? Did I overlook something?
[–]Selacios 1 point 2 hours ago
The minimum int32_t, -2^31, has binary
10000000000000000000000000000000
Using your suggested code, it would become
1000000000000000
by taking the bottom 15 bits of the original plus the sign bit of the original. This is actually min int16_t, -2^15.
However, these two actually have different signed remainders mod 65536 (the original has mod 0, the new number has mod -32768). I think what you're looking for is the simpler
x & 65535
which works because every bit above bit 15 represents a power of 2 which is 65536 or greater, hence does not contribute to the mod 65536 in any way.
[–]Beliriel 1 point 22 minutes ago
Mod can be negative? That's pretty funky. Yes my intention was casting it (or rather truncating the number) without losing the sign.
I know noise with an amplitude of -96dB FS doesn't sound like much (see what I did there?) but it is FS so may be a lot higher wrt the signal. Also, it's correlated noise so is distinctly less pleasant than
uncorrelated noise.
Finally, considering that dithering is often applied to 24 bit signals, 1 bit correlated noise on 16-bit signals is probably worthwhile doing something about, especially since it's so easy.
Granted, dithering is a whole other subject but my point is that for a simple change you can eliminate a source of noise.
As for rounding, a trick is to solve both problems at once with:
/*--------------------------------------------------------------------------------*/
/*
 * Mix two uncorrelated 16-bit samples
 *
 * @param a sample a
 * @param b sample b
 *
 * @return mix of a and b with a gain of -3dB, rounded to nearest (away from zero)
 */
/*--------------------------------------------------------------------------------*/
int16_t mix(int16_t a, int16_t b) {
// add samples, multiply by numerator of -3dB gain
int32_t sample32 = ((int32_t)a + (int32_t)b) * 23170;
// bias the sample32 value to round to nearest (away from zero) *and*
// to compensate for -ve bias of ASR (biases in LSBs):
// sign | bias for ASR | bias for RTN | total bias
// +ve | 0 | .5 | .5
// -ve | .5 | -.5 | 0
// LSB in this case is 32768 so .5 * LSB = 16384
sample32 += (sample32 >= 0) ? 16384 : 0;
// perform ASR for denominator of -3dB gain (which will now be unbiased because of the above)
sample32 >>= 15;
// limit at 16-bit limits
sample32 = (sample32 < -32768) ? -32768 : ((sample32 > 32767) ? 32767 : sample32);
// return cast version
return (int16_t)sample32;
}
(I've just rustled the above up so it's untested!)
[–]capilot 1 point 2 hours ago
I didn't know about multiplying by 1/√2 before, but I guess it makes sense.
I've also heard that dithering is a good idea; that is, adding white noise in [-16384 16384] before converting back to int16.
https://factualaudio.com/post/sum/
How exactly do modern widget toolkits (GTK+, QT) draw widgets?
jtsiomb 6 points an hour ago

There are two schools of thought when it comes to implementing widget toolkits. One is that every widget is a window, and the top-level window is a parent of a whole hierarchy. The other is that the toolkit creates only top-level windows, and everything else is drawn inside that top-level window by the toolkit itself. Older systems like Motif and Win32 "controls" follow the first approach. GTK and Qt are complicated. For GTK I think it depends on the "engine" you have selected. The default GTK "engine" I think also uses subwindows for widgets, but most of the shinier, rounder-er engines are probably drawing everything on a pixmap.

Rounded corners are not an issue either way, but they do require support from the X server in the first case. That support is widely available however, and it's called the X shape extension. You can have even top-level windows with arbitrary shapes, and you can even change that shape on the fly. See my "shapeblobs" hack: https://github.com/jtsiomb/shapeblobs (video: https://www.youtube.com/watch?v=HwJhQEVdPOE)

Shadows of top-level windows are independent of the GUI toolkit. They are handled by the desktop compositor if one is running. If one isn't running you generally can't have shadows (or at least semi-transparent fuzzy shadows) on top-level windows, because you can't have alpha blending with the rest of the desktop. Shadows on widgets within a window require the second approach of drawing everything in the window by the toolkit. The compositor only touches top-level windows.
Physics Math: Spring suspension
Hooke's Law:
F=kx
"x" being the displacement and "k" being the multiplier for the strength of the spring.
Hooke's Law including damping
F=kx-dv
"v" being the velocity and "d" being the multiplier for the strength of the damper
futurechiefexecutive to r/startups
If you're anything like me, being more productive as a Founder is always an ongoing effort. Here's something I picked up a couple of months back that's helped me significantly:
Monday Morning
Start your day by writing down your answers to these three questions:
What are the top three things I want to accomplish this week?
What steps do I need to take to accomplish each of them?
What are the challenges or blockers that stand in my way?
Friday Afternoon
Come back to this and close the feedback loop with:
Compared to the priorities I set on Monday, how did I do?
What did I do well? What made me happy?
What didn't go as planned? Where can I improve?
You can do this on a notebook, Notion, or G-Docs if you're okay with manual upkeep. MyCheckins or Standuply work well as dedicated tools.
TL;DR - Plan each week, review it at the end, and improve the next week with lessons from the last.
Installing and setting up OpenAL with HRTF
Download OpenAL Soft https://openal-soft.org/ and unzip it to C:\libraries\
If you have 64-bit Windows:
Copy soft_oal.dll from the Win32 folder into C:\Windows\SysWOW64
Copy soft_oal.dll from the Win64 folder into C:\Windows\System32
If you have 32-bit Windows:
Copy soft_oal.dll from the Win32 folder into C:\Windows\System32
Enable HRTFs in OpenAL Soft
We need to create a configuration file that will tell OpenAL Soft to use HRTFs.
Open Notepad
Type the following:
hrtf = true
Click the File menu and Save As...
Type %APPDATA% and hit enter. It will automatically take you to the folder where we need to save this configuration file.
Change the Save as type drop-down list to say All files (*.*)
Type the File name as "alsoft.ini" and click Save.
Example using OpenAL with a custom audio loading library in C++
#include <AL/al.h>   // standard OpenAL header locations; adjust for your platform
#include <AL/alc.h>
#include <cstdio>
#include <cstdint>
struct RIFF_Header {
char chunkID[4];
int32_t chunkSize;   // fixed-width types: 'long' is 8 bytes on 64-bit Linux
char format[4];      // and would break the on-disk header layout
};
struct WAVE_Format {
char subChunkID[4];
int32_t subChunkSize;
int16_t audioFormat;
int16_t numChannels;
int32_t sampleRate;
int32_t byteRate;
int16_t blockAlign;
int16_t bitsPerSample;
};
struct WAVE_Data {
char subChunkID[4];
int32_t subChunk2Size;
};
bool loadWavFile(const char* filename, ALuint* buffer,
ALsizei* size, ALsizei* frequency,
ALenum* format) {
FILE* soundFile = NULL;
WAVE_Format wave_format;
RIFF_Header riff_header;
WAVE_Data wave_data;
unsigned char* data;
try {
soundFile = fopen(filename, "rb");
if (!soundFile)
throw (filename);
fread(&riff_header, sizeof(RIFF_Header), 1, soundFile);
if ((riff_header.chunkID[0] != 'R' ||
riff_header.chunkID[1] != 'I' ||
riff_header.chunkID[2] != 'F' ||
riff_header.chunkID[3] != 'F') ||
(riff_header.format[0] != 'W' ||
riff_header.format[1] != 'A' ||
riff_header.format[2] != 'V' ||
riff_header.format[3] != 'E'))
throw ("Invalid RIFF or WAVE Header");
fread(&wave_format, sizeof(WAVE_Format), 1, soundFile);
if (wave_format.subChunkID[0] != 'f' ||
wave_format.subChunkID[1] != 'm' ||
wave_format.subChunkID[2] != 't' ||
wave_format.subChunkID[3] != ' ')
throw ("Invalid Wave Format");
if (wave_format.subChunkSize > 16)
fseek(soundFile, sizeof(short), SEEK_CUR);
fread(&wave_data, sizeof(WAVE_Data), 1, soundFile);
if (wave_data.subChunkID[0] != 'd' ||
wave_data.subChunkID[1] != 'a' ||
wave_data.subChunkID[2] != 't' ||
wave_data.subChunkID[3] != 'a')
throw ("Invalid data header");
data = new unsigned char[wave_data.subChunk2Size];
if (!fread(data, wave_data.subChunk2Size, 1, soundFile))
throw ("error loading WAVE data into struct!");
*size = wave_data.subChunk2Size;
*frequency = wave_format.sampleRate;
if (wave_format.numChannels == 1) {
if (wave_format.bitsPerSample == 8 )
*format = AL_FORMAT_MONO8;
else if (wave_format.bitsPerSample == 16)
*format = AL_FORMAT_MONO16;
} else if (wave_format.numChannels == 2) {
if (wave_format.bitsPerSample == 8 )
*format = AL_FORMAT_STEREO8;
else if (wave_format.bitsPerSample == 16)
*format = AL_FORMAT_STEREO16;
}
alGenBuffers(1, buffer);
alBufferData(*buffer, *format, (void*)data,
*size, *frequency);
fclose(soundFile);
return true;
} catch(char* error) {
if (soundFile != NULL)
fclose(soundFile);
return false;
}
}
int main(){
//Sound play data
ALint state; // The state of the sound source
ALuint bufferID; // The OpenAL sound buffer ID
ALuint sourceID; // The OpenAL sound source
ALenum format; // The sound data format
ALsizei freq; // The frequency of the sound data
ALsizei size; // Data size
// Checking for error before alcMakeContextCurrent can cause a crash to happen...
ALCdevice* device = alcOpenDevice(NULL);
// You could still check whether the device was opened successfully using if( !device )
ALCcontext* context = alcCreateContext(device, NULL);
alcMakeContextCurrent(context);
// Create sound buffer and source
alGenBuffers(1, &bufferID);
alGenSources(1, &sourceID);
// Set the source and listener to the same location
alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);
alSource3f(sourceID, AL_POSITION, 0.0f, 0.0f, 0.0f);
loadWavFile("..\\wavdata\\YOURWAVHERE.wav", &bufferID, &size, &freq, &format);
alSourcei(sourceID, AL_BUFFER, bufferID);
alSourcePlay(sourceID);
do{
alGetSourcei(sourceID, AL_SOURCE_STATE, &state);
} while (state != AL_STOPPED);
alDeleteBuffers(1, &bufferID);
alDeleteSources(1, &sourceID);
alcDestroyContext(context);
alcCloseDevice(device);
return 0;
}
I want to be able to run this application significantly faster than real-time. At the same time, the sound must be saved for later postprocessing. Is there a way to access the OpenAL output programmatically (virtually) without ever
playing the sound on the real playback device?
Ideally, I'd like to have access to the audio that would be played during every tick of the main loop of my application. Normally one tick corresponds to one rendered frame (e.g. 1/30th of a second). But in this case we would be running the app
as fast as possible.
We ended up using OpenAL Soft to do this. Example:
#include "alext.h"
LPALCLOOPBACKOPENDEVICESOFT alcLoopbackOpenDeviceSOFT;
alcLoopbackOpenDeviceSOFT = alcGetProcAddress(NULL,"alcLoopbackOpenDeviceSOFT");
replace your default device with this device
ALCcontext *context = alcCreateContext(device, attrs);
Set the attrs as you would for your default device
Then in the main loop use:
LPALCRENDERSAMPLESSOFT alcRenderSamplesSOFT;
alcRenderSamplesSOFT = alcGetProcAddress(NULL, "alcRenderSamplesSOFT");
alcRenderSamplesSOFT(device, buffer, 1024);
Here the buffer will store 1024 samples. This code runs faster than real-time, therefore you can sample frames every tick
---
I'm creating a context with
alcCreateContext(device, NULL).
The problem is that ALC_STEREO_SOURCES is 3 by default, so my program freezes if I try to play more than 3 stereo sounds.
How can I set ALC_STEREO_SOURCES to 32?
You can specify context creation attributes by making an array of type ALCint, containing ordered pairs of names and values, terminated by a zero.
So for example:
ALCint myParams[3] = {ALC_STEREO_SOURCES, 32, 0};
alcCreateContext(myDevice, myParams);
---
OpenAL applies attenuation only to mono sound.
---
I am writing a dialogue system for my game engine in C++. In order to group dialogue together I am having different dialogue sections placed within one file, and one buffer. Therefore how do I tell OpenAL to play the buffer from a
specific time (or sample it doesn't really matter to me) into the buffer. Thanks for any help in advance!
void PlayFromSpecifiedTime(ALfloat seconds) const
{
alSourcef(source, AL_SEC_OFFSET, seconds);
alSourcePlay(source);
}
Or, if you want to play from a certain sample from the buffer:
void PlayFromSpecifiedSample(ALint sample) const
{
alSourcei(source, AL_SAMPLE_OFFSET, sample);
alSourcePlay(source);
}
You can also add a check at the beginning to see if you're not trying to skip to a certain time (or sample) beyond the total amount from the buffer. If it does, you simply return; out of it. This assumes you know what you're doing.
---
I'm new to using the OpenAL library. I'm following the OpenAL Programming Guide but I can't get it working.
I have this code extracted from page 10 of the OpenAL Programming Guide but still have no sound. I use OSX Snow Leopard; I know OSX doesn't have ALUT defined.
#include <OpenAL/al.h>   // OSX framework header paths; adjust for your platform
#include <OpenAL/alc.h>
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <sys/types.h>
#include <sys/stat.h>
using namespace std;
#define NUM_BUFFERS 3
#define NUM_SOURCES 1
#define BUFFER_SIZE 4096
int main(int argc, char **argv)
{
ALCdevice *dev;
ALCcontext *ctx;
struct stat statbuf;
ALuint buffer[NUM_BUFFERS];
ALuint source[NUM_SOURCES];
ALsizei size, freq;
ALenum format;
ALenum error;
ALboolean loop;
ALvoid *data;
// Initialization
dev = alcOpenDevice(NULL); // select the "preferred dev"
if (dev)
{
ctx = alcCreateContext(dev,NULL);
alcMakeContextCurrent(ctx);
}
// Check for EAX 2.0 support
// g_bEAX = alIsExtensionPresent("EAX2.0");
// Generate Buffers
alGetError(); // clear error code
alGenBuffers(NUM_BUFFERS, buffer);
if ((error = alGetError()) != AL_NO_ERROR)
{
DisplayALError("alGenBuffers :", error);
return 1;
}
// Load test.wav
loadWAVFile("sample.wav", &format, &data, &size, &freq, &loop);
if ((error = alGetError()) != AL_NO_ERROR)
{
DisplayALError("LoadWAVFile sample.wav : ", error);
alDeleteBuffers(NUM_BUFFERS, buffer);
return 1;
}
// Copy test.wav data into AL Buffer 0
alBufferData(buffer[0], format, data, size, freq);
if ((error = alGetError()) != AL_NO_ERROR)
{
DisplayALError("alBufferData buffer 0 : ", error);
alDeleteBuffers(NUM_BUFFERS, buffer);
return 1;
}
// Unload test.wav
unloadWAV(format, data, size, freq);
if ((error = alGetError()) != AL_NO_ERROR)
{
DisplayALError("UnloadWAV : ", error);
alDeleteBuffers(NUM_BUFFERS, buffer);
return 1;
}
// Generate Sources
alGenSources(1, source);
if ((error = alGetError()) != AL_NO_ERROR)
{
DisplayALError("alGenSources 1 : ", error);
return 1;
}
// Attach buffer 0 to source
alSourcei(source[0], AL_BUFFER, buffer[0]);
if ((error = alGetError()) != AL_NO_ERROR)
{
DisplayALError("alSourcei AL_BUFFER 0 : ", error);
}
// Exit
ctx = alcGetCurrentContext();
dev = alcGetContextsDevice(ctx);
alcMakeContextCurrent(NULL);
alcDestroyContext(ctx);
alcCloseDevice(dev);
return 0;
}
What did I miss to make this code work? What am I doing wrong?
Any advice could help, thanks.
ANSWER: You are not calling alSourcePlay(source[0]) to start the playback.
ANSWER: People should also keep in mind that alSourcePlay() will execute asynchronously. So if you immediately clean up your audio resources after it returns, you probably won't hear anything at all before the program immediately exits.
---
#include <AL/al.h> // OpenAL header files
#include <AL/alc.h>
#include <list>
using std::list;
#define FREQ 22050 // Sample rate
#define CAP_SIZE 2048 // How much to capture at a time (affects latency)
int main(int argC,char* argV[])
{
list<ALuint> bufferQueue; // A quick and dirty queue of buffer objects
ALenum errorCode=0;
ALuint helloBuffer[16], helloSource[1];
ALCdevice* audioDevice = alcOpenDevice(NULL); // Request default audio device
errorCode = alcGetError(audioDevice);
ALCcontext* audioContext = alcCreateContext(audioDevice,NULL); // Create the audio context
alcMakeContextCurrent(audioContext);
errorCode = alcGetError(audioDevice);
// Request the default capture device with a half-second buffer
ALCdevice* inputDevice = alcCaptureOpenDevice(NULL,FREQ,AL_FORMAT_MONO16,FREQ/2);
errorCode = alcGetError(inputDevice);
alcCaptureStart(inputDevice); // Begin capturing
errorCode = alcGetError(inputDevice);
alGenBuffers(16,&helloBuffer[0]); // Create some buffer-objects
errorCode = alGetError();
// Queue our buffers onto an STL list
for (int ii=0;ii<16;++ii) {
bufferQueue.push_back(helloBuffer[ii]);
}
alGenSources (1, &helloSource[0]); // Create a sound source
errorCode = alGetError();
short buffer[FREQ*2]; // A buffer to hold captured audio
ALCint samplesIn=0; // How many samples are captured
ALint availBuffers=0; // Buffers to be recovered
ALuint myBuff; // The buffer we're using
ALuint buffHolder[16]; // An array to hold catch the unqueued buffers
bool done = false;
while (!done) { // Main loop
// Poll for recoverable buffers
alGetSourcei(helloSource[0],AL_BUFFERS_PROCESSED,&availBuffers);
if (availBuffers>0) {
alSourceUnqueueBuffers(helloSource[0],availBuffers,buffHolder);
for (int ii=0;ii<availBuffers;++ii) {
bufferQueue.push_back(buffHolder[ii]);
}
}
// Poll for captured samples
alcGetIntegerv(inputDevice,ALC_CAPTURE_SAMPLES,1,&samplesIn);
if (samplesIn>CAP_SIZE) {
// Grab the sound
alcCaptureSamples(inputDevice,buffer,CAP_SIZE);
//***** Process/filter captured data here *****//
//for (int ii=0;ii